url string | repository_url string | labels_url string | comments_url string | events_url string | html_url string | id int64 | node_id string | number int64 | title string | user dict | labels list | state string | locked bool | assignee dict | assignees list | milestone null | comments list | created_at timestamp[ms] | updated_at timestamp[ms] | closed_at timestamp[ms] | author_association string | type dict | active_lock_reason null | draft bool | pull_request dict | body string | closed_by dict | reactions dict | timeline_url string | performed_via_github_app null | state_reason string | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool | is_closed bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/37721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37721/comments | https://api.github.com/repos/huggingface/transformers/issues/37721/events | https://github.com/huggingface/transformers/pull/37721 | 3,014,537,836 | PR_kwDOCUB6oc6Tn14Z | 37,721 | [tests, `qwen2_5_omni`] fix flaky tests | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T16:07:10 | 2025-04-23T16:54:15 | 2025-04-23T16:54:12 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37721",
"html_url": "https://github.com/huggingface/transformers/pull/37721",
"diff_url": "https://github.com/huggingface/transformers/pull/37721.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37721.patch",
"merged_at": "2025-04-23T16:54:12"
} | # What does this PR do?
cc @zucchini-nlp
Fixes two `qwen2_5_omni` flaky issues:
1. Typo in the model name in `VLM_CLASS_NAMES` (used to skip tests) -> model was not being skipped -> flaky tests
2. `qwen2_5_omni` has a new config attribute corresponding to a token that can't be used in the output | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37721/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37720/comments | https://api.github.com/repos/huggingface/transformers/issues/37720/events | https://github.com/huggingface/transformers/issues/37720 | 3,014,509,551 | I_kwDOCUB6oc6zrcPv | 37,720 | Quantized int8 model evaluation using TP - only Tensors of floating point dtype can require gradients | {
"login": "amd-xiaoyu12",
"id": 188109516,
"node_id": "U_kgDOCzZSzA",
"avatar_url": "https://avatars.githubusercontent.com/u/188109516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amd-xiaoyu12",
"html_url": "https://github.com/amd-xiaoyu12",
"followers_url": "https://api.github.com/users/amd-xiaoyu12/followers",
"following_url": "https://api.github.com/users/amd-xiaoyu12/following{/other_user}",
"gists_url": "https://api.github.com/users/amd-xiaoyu12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amd-xiaoyu12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amd-xiaoyu12/subscriptions",
"organizations_url": "https://api.github.com/users/amd-xiaoyu12/orgs",
"repos_url": "https://api.github.com/users/amd-xiaoyu12/repos",
"events_url": "https://api.github.com/users/amd-xiaoyu12/events{/privacy}",
"received_events_url": "https://api.github.com/users/amd-xiaoyu12/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-23T15:59:17 | 2025-04-24T12:58:19 | 2025-04-24T12:58:18 | CONTRIBUTOR | null | null | null | null | ### System Info
When I test a quantized int8 model with TP, the following error occurs: only Tensors of floating point dtype can require gradients
Detail:
```
[rank0]:   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/integrations/tensor_parallel.py", line 658, in shard_and_distribute_module
[rank0]:     param = tp_layer.partition_tensor(
[rank0]:   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/integrations/tensor_parallel.py", line 385, in partition_tensor
[rank0]:     return nn.Parameter(parameter)  # , requires_grad=False)
[rank0]:   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parameter.py", line 49, in __new__
[rank0]:     t = data.detach().requires_grad_(requires_grad)
[rank0]: RuntimeError: only Tensors of floating point dtype can require gradients
```
A solution can be found in draft PR #37719.
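The root cause is that `nn.Parameter` defaults to `requires_grad=True`, which PyTorch only permits for floating-point tensors; quantized int8 weights must be wrapped with `requires_grad=False`. A minimal sketch of the dtype gate the fix needs (the names `FLOATING_DTYPES` and `parameter_kwargs` are illustrative, not from the actual patch):

```python
# Illustrative sketch: decide requires_grad from the tensor's dtype name,
# mirroring torch.Tensor.is_floating_point(). Integer (quantized) tensors
# must get requires_grad=False, otherwise nn.Parameter raises
# "only Tensors of floating point dtype can require gradients".
FLOATING_DTYPES = {"float16", "bfloat16", "float32", "float64"}

def parameter_kwargs(dtype_name: str) -> dict:
    """Return the kwargs to pass to nn.Parameter for a given dtype name."""
    return {"requires_grad": dtype_name in FLOATING_DTYPES}

# int8 weights: wrapped without gradients; bfloat16 weights: gradients allowed.
print(parameter_kwargs("int8"))
print(parameter_kwargs("bfloat16"))
```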
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Load quantized int8 model "amd/Llama-3.1-8B-Instruct-w-int8-a-int8-sym-test" and evaluate the model using tensor parallelism.
### Expected behavior
The model should run successfully using tensor parallelism. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37720/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37719/comments | https://api.github.com/repos/huggingface/transformers/issues/37719/events | https://github.com/huggingface/transformers/pull/37719 | 3,014,478,010 | PR_kwDOCUB6oc6Tnoj5 | 37,719 | Expand quantized data type support for tensor parallelism | {
"login": "amd-xiaoyu12",
"id": 188109516,
"node_id": "U_kgDOCzZSzA",
"avatar_url": "https://avatars.githubusercontent.com/u/188109516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amd-xiaoyu12",
"html_url": "https://github.com/amd-xiaoyu12",
"followers_url": "https://api.github.com/users/amd-xiaoyu12/followers",
"following_url": "https://api.github.com/users/amd-xiaoyu12/following{/other_user}",
"gists_url": "https://api.github.com/users/amd-xiaoyu12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amd-xiaoyu12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amd-xiaoyu12/subscriptions",
"organizations_url": "https://api.github.com/users/amd-xiaoyu12/orgs",
"repos_url": "https://api.github.com/users/amd-xiaoyu12/repos",
"events_url": "https://api.github.com/users/amd-xiaoyu12/events{/privacy}",
"received_events_url": "https://api.github.com/users/amd-xiaoyu12/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T15:50:14 | 2025-05-13T16:01:40 | 2025-04-24T12:34:32 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37719",
"html_url": "https://github.com/huggingface/transformers/pull/37719",
"diff_url": "https://github.com/huggingface/transformers/pull/37719.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37719.patch",
"merged_at": "2025-04-24T12:34:32"
} | # What does this PR do?
Update tensor parallelism to support generic quantized tensors with data types like int8, as well as scalar (0-dim) tensors.
Fixes #37720
```
[rank0]:   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/integrations/tensor_parallel.py", line 658, in shard_and_distribute_module
[rank0]:     param = tp_layer.partition_tensor(
[rank0]:   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/integrations/tensor_parallel.py", line 385, in partition_tensor
[rank0]:     return nn.Parameter(parameter)  # , requires_grad=False)
[rank0]:   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parameter.py", line 49, in __new__
[rank0]:     t = data.detach().requires_grad_(requires_grad)
[rank0]: RuntimeError: only Tensors of floating point dtype can require gradients
```
- 0-dim tensors do not support `[:]` slicing
Get the scalar value of a 0-dim `PySafeSlice` instead [#380]
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37719/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37718/comments | https://api.github.com/repos/huggingface/transformers/issues/37718/events | https://github.com/huggingface/transformers/pull/37718 | 3,014,458,195 | PR_kwDOCUB6oc6TnkHu | 37,718 | [cache] fix `HybridCache` init when `device` is passed | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T15:44:28 | 2025-04-24T12:36:52 | 2025-04-24T12:36:52 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37718",
"html_url": "https://github.com/huggingface/transformers/pull/37718",
"diff_url": "https://github.com/huggingface/transformers/pull/37718.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37718.patch",
"merged_at": "2025-04-24T12:36:52"
} | # What does this PR do?
The following example, where we initialize the model with the `device` argument, results in CUDA graph skips. It can be traced to a stray change in https://github.com/huggingface/transformers/pull/37007, where the device initialization was incorrectly changed. This PR reverts it.
```py
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="google/gemma-3-4b-it",
device="cuda",
torch_dtype=torch.bfloat16
)
output = pipe(text="Write a poem on Hugging Face, the company", max_new_tokens=10)
print(output)
``` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37718/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37717/comments | https://api.github.com/repos/huggingface/transformers/issues/37717/events | https://github.com/huggingface/transformers/pull/37717 | 3,014,450,494 | PR_kwDOCUB6oc6TniZt | 37,717 | check torch 2.7 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T15:41:54 | 2025-06-10T11:53:44 | 2025-06-10T11:53:44 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37717",
"html_url": "https://github.com/huggingface/transformers/pull/37717",
"diff_url": "https://github.com/huggingface/transformers/pull/37717.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37717.patch",
"merged_at": null
} | # What does this PR do?
check torch 2.7 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37717/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37716/comments | https://api.github.com/repos/huggingface/transformers/issues/37716/events | https://github.com/huggingface/transformers/pull/37716 | 3,014,424,852 | PR_kwDOCUB6oc6TncvV | 37,716 | :rotating_light: :rotating_light: Fix custom code saving | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T15:33:24 | 2025-05-26T16:37:32 | 2025-05-26T16:37:30 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37716",
"html_url": "https://github.com/huggingface/transformers/pull/37716",
"diff_url": "https://github.com/huggingface/transformers/pull/37716.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37716.patch",
"merged_at": "2025-05-26T16:37:30"
} | Right now, our code checks `self._auto_class` to see if model objects are custom code. This PR aims to make that more stable, and resolve the various custom code attributes into something cleaner. This may surface other bugs, but the end goal is that **save_pretrained() and push_to_hub() correctly save all the relevant modelling files**
TODO:
- [x] Handle saving custom objects even when they were initially loaded remotely via `--`, and add a test
- [x] Test the behaviour when loading and saving only one object class - how broken are the others?
- [x] Add tests for save_pretrained() and push_to_hub() to make sure files are getting saved correctly
- [x] Update the `trust_remote_code` prompt to indicate when a far repo is being used
- [x] Update the `trust_remote_code` prompt for local code (it's currently wrong) | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37716/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37715/comments | https://api.github.com/repos/huggingface/transformers/issues/37715/events | https://github.com/huggingface/transformers/issues/37715 | 3,014,424,328 | I_kwDOCUB6oc6zrHcI | 37,715 | Make `argmax` in `post_process_semantic_segmentation` optional | {
"login": "simonreise",
"id": 43753582,
"node_id": "MDQ6VXNlcjQzNzUzNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/43753582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonreise",
"html_url": "https://github.com/simonreise",
"followers_url": "https://api.github.com/users/simonreise/followers",
"following_url": "https://api.github.com/users/simonreise/following{/other_user}",
"gists_url": "https://api.github.com/users/simonreise/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonreise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonreise/subscriptions",
"organizations_url": "https://api.github.com/users/simonreise/orgs",
"repos_url": "https://api.github.com/users/simonreise/repos",
"events_url": "https://api.github.com/users/simonreise/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonreise/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-04-23T15:33:16 | 2025-05-10T09:37:38 | null | CONTRIBUTOR | null | null | null | null | ### Feature request
Currently, the `post_process_semantic_segmentation` function generates resized segmentation maps of shape [N, H, W], but sometimes a user might need resized class probabilities of shape [N, C, H, W].
It may be worth adding an optional boolean argument (named e.g. `class_proba`) to `post_process_semantic_segmentation` that controls whether `argmax` is performed.
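A hypothetical sketch of what the proposed flag could look like, written in plain NumPy (this is not the actual `transformers` implementation, and it omits the interpolation/resizing step that the real post-processor also performs):

```python
import numpy as np

def post_process(logits: np.ndarray, class_proba: bool = False) -> np.ndarray:
    """logits: [N, C, H, W]. Returns [N, C, H, W] probabilities
    when class_proba=True, else [N, H, W] segmentation maps."""
    # Softmax over the class dimension (axis 1), numerically stabilized.
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=1, keepdims=True)
    if class_proba:
        return probs              # [N, C, H, W] class probabilities
    return probs.argmax(axis=1)  # [N, H, W] segmentation maps
```

With `class_proba=True` the per-pixel probabilities can feed directly into metrics like ROC AUC or probability-based losses, while the default keeps the current behavior.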
### Motivation
There are cases where a user needs not the resized segmentation maps of shape [N, H, W], but the resized class probabilities of shape [N, C, H, W].
Calculating the ROC AUC metric or a custom segmentation loss (e.g. one of the [SMP Losses](https://smp.readthedocs.io/en/latest/losses.html)) might be such a case.
#### More detailed reasoning
As far as I understand (maybe I am wrong), applying `outputs.logits.softmax(dim=1)` to raw model outputs won't work for every model, because some models (like DETR or MaskFormer) produce `class_queries_logits` and `mask_queries_logits` that have to be converted to something more segmentation-map-alike first. Also, many models need their outputs to be resized, which is also done in `post_process_semantic_segmentation`.
In my case, I wanted my geospatial semantic segmentation library to be able to get class probabilities from any HF Transformers semantic segmentation model output, and I ended up copying the post-processing code for DETR and ConditionalDETR, for MaskFormer and Mask2Former, for OneFormer, and for other segmentation models separately (because they process raw model outputs in different ways) and merging it into one function (which looked extremely ugly).
<details>
<summary>Warning bad code</summary>
```py
def process_output(self, pred, x):
    if (
        isinstance(pred, transformers.modeling_outputs.SemanticSegmenterOutput)
        or isinstance(pred, transformers.models.clipseg.modeling_clipseg.CLIPSegImageSegmentationOutput)
    ):
        pred = pred.logits
        if pred.shape[2:4] != x.shape[2:4]:
            pred = torch.nn.functional.interpolate(
                pred,
                size=x.shape[2:4],
                mode="bilinear",
                align_corners=False,
            )
        pred = pred.softmax(dim=1)
    elif (
        isinstance(pred, transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrSegmentationOutput)
        or isinstance(pred, transformers.models.detr.modeling_detr.DetrSegmentationOutput)
    ):
        class_queries_logits = pred.logits  # [batch_size, num_queries, num_classes+1]
        masks_queries_logits = pred.pred_masks  # [batch_size, num_queries, height, width]
        # Remove the null class `[..., :-1]`
        # DETR treats 0 as a separate class; ConditionalDETR treats it as the null class
        if isinstance(pred, transformers.models.detr.modeling_detr.DetrSegmentationOutput):
            masks_classes = class_queries_logits.softmax(dim=-1)[..., :-1]
        else:
            masks_classes = class_queries_logits.softmax(dim=-1)
        masks_probs = masks_queries_logits.sigmoid()  # [batch_size, num_queries, height, width]
        # Semantic segmentation logits of shape (batch_size, num_classes, height, width)
        pred = torch.einsum("bqc, bqhw -> bchw", masks_classes, masks_probs)
        if pred.shape[2:4] != x.shape[2:4]:
            pred = torch.nn.functional.interpolate(
                pred,
                size=x.shape[2:4],
                mode="bilinear",
                align_corners=False,
            )
        pred = pred.softmax(dim=1)
    elif (
        isinstance(pred, transformers.models.mask2former.modeling_mask2former.Mask2FormerForUniversalSegmentationOutput)
        or isinstance(pred, transformers.models.maskformer.modeling_maskformer.MaskFormerForInstanceSegmentationOutput)
    ):
        class_queries_logits = pred.class_queries_logits  # [batch_size, num_queries, num_classes+1]
        masks_queries_logits = pred.masks_queries_logits  # [batch_size, num_queries, height, width]
        masks_classes = class_queries_logits.softmax(dim=-1)[..., :-1]
        masks_probs = masks_queries_logits.sigmoid()  # [batch_size, num_queries, height, width]
        # Semantic segmentation logits of shape (batch_size, num_classes, height, width)
        pred = torch.einsum("bqc, bqhw -> bchw", masks_classes, masks_probs)
        if pred.shape[2:4] != x.shape[2:4]:
            pred = torch.nn.functional.interpolate(
                pred,
                size=x.shape[2:4],
                mode="bilinear",
                align_corners=False,
            )
        pred = pred.softmax(dim=1)
    elif isinstance(pred, transformers.models.oneformer.modeling_oneformer.OneFormerForUniversalSegmentationOutput):
        class_queries_logits = pred.class_queries_logits  # [batch_size, num_queries, num_classes+1]
        masks_queries_logits = pred.masks_queries_logits  # [batch_size, num_queries, height, width]
        masks_classes = class_queries_logits.softmax(dim=-1)[..., :-1]
        masks_probs = masks_queries_logits.sigmoid()  # [batch_size, num_queries, height, width]
        # Semantic segmentation logits of shape (batch_size, num_classes, height, width)
        pred = torch.einsum("bqc, bqhw -> bchw", masks_classes, masks_probs)
        if pred.shape[2:4] != x.shape[2:4]:
            pred = torch.nn.functional.interpolate(
                pred,
                size=x.shape[2:4],
                mode="bilinear",
                align_corners=False,
            )
        pred = pred.softmax(dim=1)
    elif isinstance(pred, collections.OrderedDict):
        pred = pred['out']
    return pred
```
</details>
I think users should be able to get class probabilities in a more intuitive way. That's why I opened the original issue with a feature request.
If the requested feature is implemented, the user will be able to get class probabilities from any model by simply calling `probabilities = processor.post_process_semantic_segmentation(predictions, target_sizes=size, class_proba=True)`.
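To make the intended semantics concrete, here is a dependency-free sketch of the proposed switch. Plain Python lists stand in for tensors, and `post_process_pixel` is a hypothetical toy helper illustrating the proposal, not an existing Transformers API:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of class scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def post_process_pixel(logits_chw, class_proba=False):
    """Toy post-processing for one image, where logits_chw has shape [C][H][W].

    class_proba=False -> (H, W) map of argmax class ids (current behaviour).
    class_proba=True  -> (C, H, W) per-class probabilities (proposed behaviour).
    """
    C = len(logits_chw)
    H = len(logits_chw[0])
    W = len(logits_chw[0][0])
    if class_proba:
        out = [[[0.0] * W for _ in range(H)] for _ in range(C)]
        for h in range(H):
            for w in range(W):
                probs = softmax([logits_chw[c][h][w] for c in range(C)])
                for c in range(C):
                    out[c][h][w] = probs[c]
        return out
    # Default behaviour: collapse the class dimension with argmax
    return [
        [max(range(C), key=lambda c: logits_chw[c][h][w]) for w in range(W)]
        for h in range(H)
    ]

# 2 classes, 1x2 image: class 1 wins on both pixels
logits = [[[0.1, -1.0]], [[2.0, 3.0]]]
seg_map = post_process_pixel(logits)                  # shape (H, W)
probs = post_process_pixel(logits, class_proba=True)  # shape (C, H, W)
```

The only difference between the two modes is whether the class dimension is collapsed, which is exactly what the flag would control in the real post-processing code.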
### Your contribution
Implementing it doesn't look hard, but it would require changing every ImageProcessor and FastImageProcessor that supports semantic segmentation.
`class_proba` is optional and `False` by default for backward compatibility.
```py
def post_process_semantic_segmentation(
    self,
    outputs,
    target_sizes: Optional[List[Tuple]] = None,
    class_proba: Optional[bool] = False,
):
    """
    ...
    class_proba (`bool`, *optional*, defaults to `False`):
        Whether to keep class probabilities.
        Will return (N, H, W) segmentation maps if False and (N, C, H, W) class probability maps if True.
    """
    ...
    # Old code:
    # semantic_map = resized_logits[0].argmax(dim=0)
    # New code:
    if class_proba:
        semantic_map = resized_logits[0].softmax(dim=0)
    else:
        semantic_map = resized_logits[0].argmax(dim=0)
    ...
    # Old code:
    # semantic_segmentation = logits.argmax(dim=1)
    # New code:
    if class_proba:
        semantic_segmentation = logits.softmax(dim=1)
    else:
        semantic_segmentation = logits.argmax(dim=1)
```
UPD: it looks like most custom metrics and losses (like Torchmetrics AUROC or the SMP Jaccard loss) accept logits as inputs, so we do not need to convert logits to probabilities. But we still need to resize raw model outputs or convert class and mask queries, so the proposed feature would be useful anyway.
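As a quick plain-Python sanity check of that update (hypothetical numbers): softmax is strictly monotonic, so it never changes the ranking of the classes, which is why argmax-based maps and threshold-free metrics can consume logits directly.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def argmax(xs):
    """Index of the largest element."""
    return max(range(len(xs)), key=lambda i: xs[i])

logits = [2.5, -0.3, 1.1, 0.0]
probs = softmax(logits)

# The winning class is the same whether we rank logits or probabilities.
same = argmax(logits) == argmax(probs)
```

So converting to probabilities only matters for consumers that need calibrated values in [0, 1], not for the ranking itself.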
Also, we could give the user more control by implementing something like:
```py
def post_process_semantic_segmentation(
    self,
    outputs,
    target_sizes: Optional[List[Tuple]] = None,
    output_format: Optional[Literal["segmentation_maps", "logits", "class_proba"]] = "segmentation_maps",
):
    """
    ...
    output_format (`str`, *optional*, defaults to `"segmentation_maps"`):
        Output data format.
        "segmentation_maps": will return (N, H, W) segmentation maps.
        "class_proba": will return (N, C, H, W) class probability maps. Class probabilities are in range [0, 1] and sum to 1.
        "logits": will return (N, C, H, W) raw logits.
    """
    ...
    if output_format == "segmentation_maps":
        semantic_map = resized_logits[0].argmax(dim=0)
    elif output_format == "class_proba":
        semantic_map = resized_logits[0].softmax(dim=0)
    elif output_format == "logits":
        semantic_map = resized_logits[0]
    else:
        raise ValueError("Unknown output format")
``` | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37715/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/37714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37714/comments | https://api.github.com/repos/huggingface/transformers/issues/37714/events | https://github.com/huggingface/transformers/pull/37714 | 3,014,397,390 | PR_kwDOCUB6oc6TnWt1 | 37,714 | TP quant update | {
"login": "amd-xiaoyu12",
"id": 188109516,
"node_id": "U_kgDOCzZSzA",
"avatar_url": "https://avatars.githubusercontent.com/u/188109516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amd-xiaoyu12",
"html_url": "https://github.com/amd-xiaoyu12",
"followers_url": "https://api.github.com/users/amd-xiaoyu12/followers",
"following_url": "https://api.github.com/users/amd-xiaoyu12/following{/other_user}",
"gists_url": "https://api.github.com/users/amd-xiaoyu12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amd-xiaoyu12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amd-xiaoyu12/subscriptions",
"organizations_url": "https://api.github.com/users/amd-xiaoyu12/orgs",
"repos_url": "https://api.github.com/users/amd-xiaoyu12/repos",
"events_url": "https://api.github.com/users/amd-xiaoyu12/events{/privacy}",
"received_events_url": "https://api.github.com/users/amd-xiaoyu12/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T15:23:53 | 2025-04-24T15:41:34 | 2025-04-23T15:39:49 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37714",
"html_url": "https://github.com/huggingface/transformers/pull/37714",
"diff_url": "https://github.com/huggingface/transformers/pull/37714.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37714.patch",
"merged_at": null
} | # What does this PR do?
Update tensor parallel to support generic quantized tensors with data types like int8, as well as scalar tensors.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Error-1:
```
[rank0]:   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/integrations/tensor_parallel.py", line 658, in shard_and_distribute_module
[rank0]:     param = tp_layer.partition_tensor(
[rank0]:   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/integrations/tensor_parallel.py", line 385, in partition_tensor
[rank0]:     return nn.Parameter(parameter)  # , requires_grad=False)
[rank0]:   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parameter.py", line 49, in __new__
[rank0]:     t = data.detach().requires_grad_(requires_grad)
```
Error-2:
Get the scaler value of a 0-dim PySafeSlice #380
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "amd-xiaoyu12",
"id": 188109516,
"node_id": "U_kgDOCzZSzA",
"avatar_url": "https://avatars.githubusercontent.com/u/188109516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amd-xiaoyu12",
"html_url": "https://github.com/amd-xiaoyu12",
"followers_url": "https://api.github.com/users/amd-xiaoyu12/followers",
"following_url": "https://api.github.com/users/amd-xiaoyu12/following{/other_user}",
"gists_url": "https://api.github.com/users/amd-xiaoyu12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amd-xiaoyu12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amd-xiaoyu12/subscriptions",
"organizations_url": "https://api.github.com/users/amd-xiaoyu12/orgs",
"repos_url": "https://api.github.com/users/amd-xiaoyu12/repos",
"events_url": "https://api.github.com/users/amd-xiaoyu12/events{/privacy}",
"received_events_url": "https://api.github.com/users/amd-xiaoyu12/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37714/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37713/comments | https://api.github.com/repos/huggingface/transformers/issues/37713/events | https://github.com/huggingface/transformers/issues/37713 | 3,014,356,685 | I_kwDOCUB6oc6zq27N | 37,713 | Loading and Saving Pretrained model to the same directory raises SafeTensorError: IOError | {
"login": "minerharry",
"id": 35383543,
"node_id": "MDQ6VXNlcjM1MzgzNTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/35383543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minerharry",
"html_url": "https://github.com/minerharry",
"followers_url": "https://api.github.com/users/minerharry/followers",
"following_url": "https://api.github.com/users/minerharry/following{/other_user}",
"gists_url": "https://api.github.com/users/minerharry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minerharry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minerharry/subscriptions",
"organizations_url": "https://api.github.com/users/minerharry/orgs",
"repos_url": "https://api.github.com/users/minerharry/repos",
"events_url": "https://api.github.com/users/minerharry/events{/privacy}",
"received_events_url": "https://api.github.com/users/minerharry/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-23T15:09:23 | 2025-06-02T08:03:08 | 2025-06-02T08:03:08 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.51.3
- Platform: Windows-10-10.0.26100-SP0
- Python version: 3.10.17
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu126 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA GeForce RTX 3060
### Who can help?
@amyeroberts, I guess? Not model-specific as far as I can tell
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForCausalLM as Model
pretrained = "distilbert/distilgpt2"
model = Model.from_pretrained(pretrained) #load from remote - works fine
model.save_pretrained("local_model") #works fine
##later, could be in the same python instance or after the first has closed
model = Model.from_pretrained("local_model")
#ideally some finetuning here, but even with no code in between:
try:
    model.save_pretrained("local_model")  # raises SafeTensorError
except Exception as e:
    import traceback as tb
    tb.print_exc()
#I can save to a different location, though:
model.save_pretrained("local_model2")
#but even if I reload from the other folder
model = Model.from_pretrained("local_model2")
#it still breaks on the first folder
model.save_pretrained("local_model") #raises SafeTensorError
```
For the AutoModelForCausalLM, the last line breaks consistently in the script. However, if I instead use
```
from transformers import Mask2FormerForUniversalSegmentation as Model
pretrained = "facebook/mask2former-swin-small-ade-semantic"
```
The last line doesn't break, so saving/reloading via a different directory is a workaround. I think it also works for the AutoModel if the model is already on disk (skipping the download+save lines), but it's been inconsistent.
However, what does consistently work is saving the model to another folder, *deleting the reference to the original model*, and then loading:
```
#fresh python script, model already downloaded to local_model
from transformers import Mask2FormerForUniversalSegmentation as Model
model = Model.from_pretrained("local_model")
model.save_pretrained("local_model2")
#necessary
del model
model = Model.from_pretrained("local_model2")
model.save_pretrained("local_model") #works
```
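For what it's worth, here is a dependency-free sketch of why the `del` seems to matter. `weakref.finalize` on a stand-in object mimics the memory-mapped safetensors handle that is only released once the last reference to the model is dropped; this illustrates the suspected mechanism, not actual Transformers code:

```python
import gc
import weakref

class FakeModel:
    """Stand-in for a loaded model holding a memory-mapped checkpoint file."""
    pass

released = []

model = FakeModel()
# Register a callback that fires once the object is garbage-collected,
# mimicking the OS releasing the underlying file handle.
weakref.finalize(model, released.append, "handle released")

# While `model` is alive, the (fake) handle stays open --
# on Windows, an open handle blocks rewriting the file.
still_open = len(released) == 0

del model
gc.collect()  # CPython frees it on del; gc.collect() for good measure

# After dropping the last reference, the finalizer has run,
# which is why `del model` before re-saving avoids the locked-file error.
handle_released = released == ["handle released"]
```

If the real bug is an mmap kept open by safetensors (as in the linked safetensors issue), the same reference-counting logic would explain why saving works only after the old model object is gone.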
### Expected behavior
I want to be able to save a locally loaded model to the folder from which it was loaded for autosave purposes. However, for the models I have tested (so far: Mask2FormerForUniversalSegmentation, MaskFormerForInstanceSegmentation, and AutoModelForCausalLM), this breaks with a SafeTensorError IOError. This seems like it might be related to https://github.com/huggingface/safetensors/issues/164, where a safetensor handle stayed open after loading parameters. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37713/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37713/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37712/comments | https://api.github.com/repos/huggingface/transformers/issues/37712/events | https://github.com/huggingface/transformers/issues/37712 | 3,014,277,687 | I_kwDOCUB6oc6zqjo3 | 37,712 | Very slow model instantiation | {
"login": "rvorias",
"id": 21060408,
"node_id": "MDQ6VXNlcjIxMDYwNDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/21060408?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rvorias",
"html_url": "https://github.com/rvorias",
"followers_url": "https://api.github.com/users/rvorias/followers",
"following_url": "https://api.github.com/users/rvorias/following{/other_user}",
"gists_url": "https://api.github.com/users/rvorias/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rvorias/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rvorias/subscriptions",
"organizations_url": "https://api.github.com/users/rvorias/orgs",
"repos_url": "https://api.github.com/users/rvorias/repos",
"events_url": "https://api.github.com/users/rvorias/events{/privacy}",
"received_events_url": "https://api.github.com/users/rvorias/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | [] | 2025-04-23T14:41:49 | 2025-06-01T08:02:44 | 2025-06-01T08:02:44 | NONE | null | null | null | null | ### System Info
`transformers` version: 4.51.3
- Platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA RTX 4000 SFF Ada Generation
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large", torch_dtype=torch_dtype, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
```
this is the culprit: https://github.com/huggingface/transformers/blob/9ec8be56ddab5e63524d2451735f92238a4d861b/src/transformers/safetensors_conversion.py#L26
update: when interrupting the process, it seems more likely that it's stuck here:
```
File "/home/vd/.venv/lib/python3.10/site-packages/torch/serialization.py", line 1888, in load_tensor
    zip_file.get_storage_from_record(name, numel, torch.UntypedStorage)
KeyboardInterrupt
```
Rolling back to 4.46.1 avoids this issue.
### Expected behavior
Model instantiation should NOT take 5 minutes.
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37712/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37711/comments | https://api.github.com/repos/huggingface/transformers/issues/37711/events | https://github.com/huggingface/transformers/pull/37711 | 3,014,238,508 | PR_kwDOCUB6oc6Tm0HG | 37,711 | Skip red Internvl tests | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T14:27:57 | 2025-04-23T14:41:37 | 2025-04-23T14:39:47 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37711",
"html_url": "https://github.com/huggingface/transformers/pull/37711",
"diff_url": "https://github.com/huggingface/transformers/pull/37711.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37711.patch",
"merged_at": null
} | Right now, a couple of InternVL tests are failing on `main`. I'm going to skip them for now to stop blocking other PRs, but it would be good to fix and un-skip them! cc @yonigozlan @zucchini-nlp | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37711/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37710 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37710/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37710/comments | https://api.github.com/repos/huggingface/transformers/issues/37710/events | https://github.com/huggingface/transformers/issues/37710 | 3,014,225,174 | I_kwDOCUB6oc6zqW0W | 37,710 | Can't perform inference with images on Gemma-3-12b-it-qat-int4.0 | {
"login": "njemanzedavid",
"id": 169468103,
"node_id": "U_kgDOChngxw",
"avatar_url": "https://avatars.githubusercontent.com/u/169468103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/njemanzedavid",
"html_url": "https://github.com/njemanzedavid",
"followers_url": "https://api.github.com/users/njemanzedavid/followers",
"following_url": "https://api.github.com/users/njemanzedavid/following{/other_user}",
"gists_url": "https://api.github.com/users/njemanzedavid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/njemanzedavid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njemanzedavid/subscriptions",
"organizations_url": "https://api.github.com/users/njemanzedavid/orgs",
"repos_url": "https://api.github.com/users/njemanzedavid/repos",
"events_url": "https://api.github.com/users/njemanzedavid/events{/privacy}",
"received_events_url": "https://api.github.com/users/njemanzedavid/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-23T14:23:26 | 2025-06-01T08:02:46 | 2025-06-01T08:02:46 | NONE | null | null | null | null | ### System Info
Hi, I am a noob here. Please can you share a code snippet for how to use this Gemma 3 version to perform inference on images? Specifically, I want it to filter images from an input folder into different output folders based on a set of criteria I outline in the prompt. The prompt tells it to answer YES or NO depending on whether an image meets the criteria, then my code uses that answer to move the images to the appropriate folders.
----------------------------
here is my prompt:
<image_soft_token>
Analyze the image. Does it meet BOTH criteria: 1. At least 2 football players visible. 2. At least one player performing a clear football action (kick, tackle, dribble, save etc.)? Answer ONLY YES or NO.
-----------
here is gemini 2.5 pro's suggestion:
Okay, the added print statement confirms it perfectly.
1. **The Variable is Correct:** The DEBUG: Prompt being passed to processor: output clearly shows the string does start with <image_soft_token>\n.... So, the variable PROMPT_TEXT_CLASSIFY **is** correctly updated and passed to the function.
2. **The Processor Fails:** Despite receiving the correct prompt string containing the <image_soft_token>, the processor's internal logic (/usr/local/lib/python3.11/dist-packages/transformers/models/gemma3/processing_gemma3.py, line 122) **still fails to detect it** and incorrectly reports finding 0 image tokens.
**Conclusion:**
This definitively looks like a **bug within the Gemma3Processor implementation in the transformers library** specifically for the model handle google/gemma-3/transformers/gemma-3-12b-it-qat-int4-unquantized (or perhaps for Gemma 3 processing in general in the current library version).
The processor is simply not correctly parsing the special token it claims to use (<image_soft_token>) from the text input when an image is also provided.
------------
I am running this in Google Colab.
Gemini 2.5 Pro says (see above) that it is a bug in the transformers library's processing code for this model and that I should report it on GitHub, but I just want to be sure it isn't actually due to my lack of knowledge. I would be very grateful for any help on this.
### Who can help?
@amyeroberts, @SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1oLoAIaOkUvoLHjRemWXB-CmDDsUrBrhB
---------------
stack traces:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 354.7/354.7 kB 26.7 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 76.1/76.1 MB 10.4 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 363.4/363.4 MB 4.1 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.8/13.8 MB 69.7 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.6/24.6 MB 37.0 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 883.7/883.7 kB 41.5 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 664.8/664.8 MB 2.9 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 211.5/211.5 MB 5.5 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.3/56.3 MB 13.2 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 127.9/127.9 MB 7.6 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 207.5/207.5 MB 5.9 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 21.1/21.1 MB 104.2 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.6/4.6 MB 111.1 MB/s eta 0:00:00
--- Gemma 3 12B Instruct (Transformers) Image Sorting Script ---
GPU found: Tesla T4. Using GPU (cuda).
GPU Memory Status:
Tesla T4, 15360, 2, 15092
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
Google Drive mounted successfully at /content/drive.
Kaggle API key found at /root/.kaggle/kaggle.json.
4-bit quantization ENABLED via BitsAndBytes. Compute dtype: torch.bfloat16.
Input folder: /content/drive/MyDrive/Colab_Uploads/Football_Images_Input
Output 'Meets Criteria' folder: /content/drive/MyDrive/Colab_Uploads/Football_Images_Meets_Criteria_Gemma3_TF
Output 'Does Not Meet' folder: /content/drive/MyDrive/Colab_Uploads/Football_Images_Does_Not_Meet_Gemma3_TF
Error folder: /content/drive/MyDrive/Colab_Uploads/Football_Images_Errors_Gemma3_TF
Target Kaggle Model Handle: google/gemma-3/transformers/gemma-3-12b-it-qat-int4-unquantized
Target data type: torch.bfloat16
Attempting to download model 'google/gemma-3/transformers/gemma-3-12b-it-qat-int4-unquantized' using kagglehub...
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
kagglehub download finished in 3.82 seconds.
Model files downloaded to local path: /kaggle/input/gemma-3/transformers/gemma-3-12b-it-qat-int4-unquantized/1
Essential config files found in download path.
Loading processor from local path: /kaggle/input/gemma-3/transformers/gemma-3-12b-it-qat-int4-unquantized/1...
Processor loaded successfully.
Loading model from local path: /kaggle/input/gemma-3/transformers/gemma-3-12b-it-qat-int4-unquantized/1...
Quantization: Enabled (4-bit BNB)
Loading checkpoint shards: 100%
5/5 [03:37<00:00, 43.29s/it]
Model loaded successfully from local path.
Model placed on cuda.
Current GPU Memory Usage After Model Load:
Tesla T4, 15360, 8394, 6700
---------------
here is the error:
ERROR:root:STEP 3 FAILED: Error during processor preparation for laliga_image_100.jpeg: Prompt contained 0 image tokens but received 1 images.
Traceback (most recent call last):
File "<ipython-input-8-d006048ffea1>", line 40, in analyze_image_gemma3_transformers
inputs = processor(text=PROMPT_TEXT_CLASSIFY, images=img, return_tensors="pt").to(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/models/gemma3/processing_gemma3.py", line 122, in __call__
raise ValueError(
ValueError: Prompt contained 0 image tokens but received 1 images.
Found 457 image files in 'Colab_Uploads/Football_Images_Input'.
Starting Gemma 3 Transformers processing loop for 457 images...
Processing time depends on hardware (cuda).
--- DIAGNOSTIC MODE: Processing ONLY the first file: laliga_image_100.jpeg ---
--- Starting analysis for: laliga_image_100.jpeg ---
STEP 2 SUCCESS: Loaded image laliga_image_100.jpeg
DEBUG: Prompt being passed to processor:
>>>
<image_soft_token>
Analyze the image. Does it meet BOTH criteria: 1. At least 2 football players visible. 2. At least one player performing a clear football action (kick, tackle, dribble, save etc.)? Answer ONLY YES or NO.
<<<
--- Finished analysis attempt for: laliga_image_100.jpeg ---
--- DIAGNOSTIC MODE: Finished processing laliga_image_100.jpeg ---
--- Gemma 3 Transformers Processing Session Complete ---
Images attempted in this session (Gemma3 TF): 1
- Successfully classified (YES/NO): 0
- Errors (moved to 'Football_Images_Errors_Gemma3_TF'): 1
Images skipped (already processed): 0
Estimated image files remaining in 'Colab_Uploads/Football_Images_Input': 456
Check the 'Football_Images_Errors_Gemma3_TF' folder for Gemma 3 TF processing errors.
Results are in 'Football_Images_Meets_Criteria_Gemma3_TF' and 'Football_Images_Does_Not_Meet_Gemma3_TF'.
### Expected behavior
All the filtered images would have been moved from the input folder to the appropriate output folder, based on whether they meet the criteria in the prompt (YES), do not meet them (NO), or caused an error.
--------
here is the output of a successful run using paligemma:
Found 457 potential image files in /content/drive/MyDrive/Colab_Uploads/Football_Images_Input.
Starting local processing using google/paligemma-3b-mix-224...
Processing time per image depends heavily on GPU performance.
Processing Images Locally: 100%
457/457 [05:29<00:00, 1.40it/s]
WARNING:root:Could not parse YES/NO from the end or whole string: 'Does this image show at least two football players AND are they performing an action (like dribbling, shooting, tackling)? Answer YES or NO.
answering does not require reading text in the image'
WARNING:root:Unexpected final decision or parse error for laliga_image_108.jpeg: Parsed='', Raw='
Does this image show at least two football players AND are they performing an action (like dribbling, shooting, tackling)? Answer YES or NO.
answering does not require reading text in the image'. Moving to error folder.
--- Processing Session Complete ---
Attempted processing for: 456 images in this session.
- Moved successfully (Yes/No): 456
- Errors (moved to error folder): 0
Skipped (already processed in previous runs): 0 images.
Estimated files remaining in input folder: 0
Check the 'Football_Images_Errors' folder for images that failed processing or caused errors.
Results in 'Football_Images_Meets_Criteria' and 'Football_Images_Does_Not_Meet'. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37710/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37709 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37709/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37709/comments | https://api.github.com/repos/huggingface/transformers/issues/37709/events | https://github.com/huggingface/transformers/pull/37709 | 3,014,147,953 | PR_kwDOCUB6oc6Tmf9U | 37,709 | [generate] skip compilation on cpu offload | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T14:03:12 | 2025-04-24T13:08:21 | 2025-04-24T13:08:18 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37709",
"html_url": "https://github.com/huggingface/transformers/pull/37709",
"diff_url": "https://github.com/huggingface/transformers/pull/37709.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37709.patch",
"merged_at": "2025-04-24T13:08:18"
} | # What does this PR do?
`torch.compile` + model CPU offload currently results in crashes. It should work in theory, but it does not at the moment.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
device_map = {"model.embed_tokens": 0, "model.layers.0": 0, "model.layers.1": "cpu", "model.norm": "cpu", "lm_head": 0}
model = AutoModelForCausalLM.from_pretrained(
"hf-internal-testing/tiny-random-MistralForCausalLM", device_map=device_map
)
tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-MistralForCausalLM")
tokenized_inputs = tokenizer(["Hello world"], return_tensors="pt")
input_ids = tokenized_inputs.input_ids.to(0)
# Uses a compilable cache -> compilation happens under the hood
output = model.generate(input_ids, max_new_tokens=20, cache_implementation="static")
```
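To visualize the new check, here is a minimal, hypothetical sketch (the helper name and the way the device map is inspected are assumptions for illustration, not the actual transformers implementation) of skipping compilation when the accelerate-style device map offloads any module to CPU or disk:

```python
# Hypothetical sketch: before triggering "auto compile", inspect the
# accelerate-style device map and skip torch.compile if any module is
# offloaded to CPU or disk.
def has_offloaded_modules(device_map: dict) -> bool:
    # Device map values are device indices (e.g. 0) or the strings "cpu"/"disk"
    return any(device in ("cpu", "disk") for device in device_map.values())

device_map = {
    "model.embed_tokens": 0,
    "model.layers.0": 0,
    "model.layers.1": "cpu",
    "model.norm": "cpu",
    "lm_head": 0,
}

# False here -> "auto compile" would be skipped for this model
compile_forward = not has_offloaded_modules(device_map)
```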
This PR:
1. Moves the logic to trigger "auto compile" into its own function
2. Disables "auto compile" when there is CPU offload (and disk offload too, which is not expected to support torch.compile)
3. Adds a test to prevent regressions | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37709/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37708 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37708/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37708/comments | https://api.github.com/repos/huggingface/transformers/issues/37708/events | https://github.com/huggingface/transformers/pull/37708 | 3,014,134,723 | PR_kwDOCUB6oc6Tmc-M | 37,708 | [Fix] Fixing tp_plan is None error for caching_allocator_warmup | {
"login": "kcz358",
"id": 92624596,
"node_id": "U_kgDOBYVW1A",
"avatar_url": "https://avatars.githubusercontent.com/u/92624596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kcz358",
"html_url": "https://github.com/kcz358",
"followers_url": "https://api.github.com/users/kcz358/followers",
"following_url": "https://api.github.com/users/kcz358/following{/other_user}",
"gists_url": "https://api.github.com/users/kcz358/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kcz358/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kcz358/subscriptions",
"organizations_url": "https://api.github.com/users/kcz358/orgs",
"repos_url": "https://api.github.com/users/kcz358/repos",
"events_url": "https://api.github.com/users/kcz358/events{/privacy}",
"received_events_url": "https://api.github.com/users/kcz358/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T13:59:44 | 2025-06-05T09:03:04 | 2025-04-24T12:56:35 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37708",
"html_url": "https://github.com/huggingface/transformers/pull/37708",
"diff_url": "https://github.com/huggingface/transformers/pull/37708.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37708.patch",
"merged_at": null
} | # What does this PR do?
Fixes #37663
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@MekkCyber
| {
"login": "kcz358",
"id": 92624596,
"node_id": "U_kgDOBYVW1A",
"avatar_url": "https://avatars.githubusercontent.com/u/92624596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kcz358",
"html_url": "https://github.com/kcz358",
"followers_url": "https://api.github.com/users/kcz358/followers",
"following_url": "https://api.github.com/users/kcz358/following{/other_user}",
"gists_url": "https://api.github.com/users/kcz358/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kcz358/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kcz358/subscriptions",
"organizations_url": "https://api.github.com/users/kcz358/orgs",
"repos_url": "https://api.github.com/users/kcz358/repos",
"events_url": "https://api.github.com/users/kcz358/events{/privacy}",
"received_events_url": "https://api.github.com/users/kcz358/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37708/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37707 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37707/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37707/comments | https://api.github.com/repos/huggingface/transformers/issues/37707/events | https://github.com/huggingface/transformers/pull/37707 | 3,014,078,968 | PR_kwDOCUB6oc6TmQjm | 37,707 | TransfoXL is deprecated, don't keep it in tested examples! | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T13:43:02 | 2025-04-23T14:00:00 | 2025-04-23T13:59:38 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37707",
"html_url": "https://github.com/huggingface/transformers/pull/37707",
"diff_url": "https://github.com/huggingface/transformers/pull/37707.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37707.patch",
"merged_at": "2025-04-23T13:59:38"
} | null | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37707/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37706 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37706/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37706/comments | https://api.github.com/repos/huggingface/transformers/issues/37706/events | https://github.com/huggingface/transformers/issues/37706 | 3,013,793,698 | I_kwDOCUB6oc6zotei | 37,706 | `last_cache_position` definition issue in hybrid SWA models | {
"login": "plienhar",
"id": 118842459,
"node_id": "U_kgDOBxVkWw",
"avatar_url": "https://avatars.githubusercontent.com/u/118842459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plienhar",
"html_url": "https://github.com/plienhar",
"followers_url": "https://api.github.com/users/plienhar/followers",
"following_url": "https://api.github.com/users/plienhar/following{/other_user}",
"gists_url": "https://api.github.com/users/plienhar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plienhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plienhar/subscriptions",
"organizations_url": "https://api.github.com/users/plienhar/orgs",
"repos_url": "https://api.github.com/users/plienhar/repos",
"events_url": "https://api.github.com/users/plienhar/events{/privacy}",
"received_events_url": "https://api.github.com/users/plienhar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | [] | 2025-04-23T12:11:07 | 2025-06-28T08:02:52 | 2025-06-28T08:02:52 | NONE | null | null | null | null | ### System Info
```text
- transformers version: 4.51.2
- Platform: Linux-6.8.0-1021-aws-x86_64-with-glibc2.35
- Python version: 3.12.8
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: 1.5.2
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA L40S
```
### Who can help?
@gante
### Reproduction
For the Cohere2 and Gemma2 models, if the `last_cache_position` argument is not supplied at runtime to their `Model.forward` method, it is created either using the 2D attention mask if supplied, or using the cache position tensor. From the source code:
```python
if last_cache_position is None:
last_cache_position = 0
if attention_mask is not None:
last_cache_position = (
attention_mask.shape[-1] if attention_mask.dim() == 2 else cache_position[-1].item()
)
```
However, by design, `attention_mask.shape[-1]` is never equal to `cache_position[-1].item()`; it is always equal to `cache_position[-1].item() + 1`.
This can be verified with the following script:
```python
import torch
def create_cache_position(attention_mask_2d: torch.LongTensor, is_prefill: bool) -> torch.LongTensor:
    # From transformers.utils.GenerationMixin._get_initial_cache_position & _update_model_kwargs_for_generation
cache_position = torch.ones_like(attention_mask_2d[0, :], dtype=torch.int64).cumsum(0) - 1
if is_prefill:
return cache_position
else:
return cache_position[-1:]
def update_2d_attention_mask(attention_mask_2d: torch.LongTensor, padding_side: str) -> torch.LongTensor:
    # From transformers.utils.GenerationMixin._update_model_kwargs_for_generation
batch_size, _ = attention_mask_2d.shape
if padding_side == "left":
attention_mask_2d = torch.cat([attention_mask_2d, attention_mask_2d.new_ones((batch_size, 1))], dim=1)
else:
attention_mask_2d = torch.cat([attention_mask_2d.new_ones((batch_size, 1)), attention_mask_2d], dim=1)
return attention_mask_2d
# PREFILL
attention_mask_2d = torch.tensor([[1, 1, 1, 1, 1]], dtype=torch.int32)
cache_position = create_cache_position(attention_mask_2d, is_prefill=True)
assert attention_mask_2d.shape[-1] == cache_position[-1].item() + 1
# TOKEN GENERATION
attention_mask_2d = update_2d_attention_mask(attention_mask_2d, padding_side="left")
cache_position = create_cache_position(attention_mask_2d, is_prefill=False)
assert attention_mask_2d.shape[-1] == cache_position[-1].item() + 1
```
### Expected behavior
Defining `last_cache_position = attention_mask.shape[-1]` produces the expected behavior (and this is the behavior we get when using the `generate` API, at least with the Cohere 2 model), so we just need to make `last_cache_position` consistent as follows:
```python
if last_cache_position is None:
    last_cache_position = 0
    if attention_mask is not None:
        last_cache_position = (
            attention_mask.shape[-1] if attention_mask.dim() == 2 else cache_position[-1].item() + 1
        )
```
However, multiple docstrings and comments in the code describe `last_cache_position` as being identical to `cache_position[-1]`. If we instead choose to define `last_cache_position = cache_position[-1]`, then the code above must be adjusted as follows:
```python
if last_cache_position is None:
    last_cache_position = 0
    if attention_mask is not None:
        last_cache_position = (
            attention_mask.shape[-1] - 1 if attention_mask.dim() == 2 else cache_position[-1].item()
        )
```
On top of that, the attention mask subsetting operation in the decoder layer's `forward` method (which is the only place where `last_cache_position` is being used) must be adjusted to account for this change:
```python
effective_seq_len = max(cache_position.shape[0], self.sliding_window)
# ...
offset = (last_cache_position + 1) - effective_seq_len
offset = max(0, offset)
attention_mask = attention_mask[:, :, :, offset : offset + effective_seq_len]
```
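For completeness, here is a quick pure-Python sketch (independent of the actual modeling code, so take it as an illustration only) checking that the two proposed conventions select the same attention-mask window:

```python
# Sketch: verify that defining last_cache_position as attention_mask.shape[-1]
# (convention A) or as cache_position[-1] with the offset shifted by +1
# (convention B) yields the same slicing offset. Standalone check, not the
# actual modeling code.
sliding_window = 4

for seq_len in range(1, 12):
    cache_position = list(range(seq_len))
    effective_seq_len = max(len(cache_position), sliding_window)

    # Convention A: last_cache_position = attention_mask.shape[-1] (== seq_len),
    # decoder layer uses offset = last_cache_position - effective_seq_len
    offset_a = max(0, seq_len - effective_seq_len)

    # Convention B: last_cache_position = cache_position[-1] (== seq_len - 1),
    # decoder layer uses offset = (last_cache_position + 1) - effective_seq_len
    offset_b = max(0, (cache_position[-1] + 1) - effective_seq_len)

    assert offset_a == offset_b
```

Since `cache_position[-1] + 1 == seq_len` by construction, the two conventions are algebraically equivalent once the decoder-layer offset is adjusted as shown above.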
| {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37706/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37706/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37705 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37705/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37705/comments | https://api.github.com/repos/huggingface/transformers/issues/37705/events | https://github.com/huggingface/transformers/issues/37705 | 3,013,753,326 | I_kwDOCUB6oc6zojnu | 37,705 | [i18n-Chinese] Translating model_doc/bert.md to Chinese | {
"login": "Nanji-Huaji",
"id": 63086636,
"node_id": "MDQ6VXNlcjYzMDg2NjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/63086636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nanji-Huaji",
"html_url": "https://github.com/Nanji-Huaji",
"followers_url": "https://api.github.com/users/Nanji-Huaji/followers",
"following_url": "https://api.github.com/users/Nanji-Huaji/following{/other_user}",
"gists_url": "https://api.github.com/users/Nanji-Huaji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nanji-Huaji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nanji-Huaji/subscriptions",
"organizations_url": "https://api.github.com/users/Nanji-Huaji/orgs",
"repos_url": "https://api.github.com/users/Nanji-Huaji/repos",
"events_url": "https://api.github.com/users/Nanji-Huaji/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nanji-Huaji/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | null | [] | 2025-04-23T11:56:26 | 2025-05-19T17:19:55 | 2025-05-19T17:19:55 | CONTRIBUTOR | null | null | null | null | I notice that no one has translated `docs/source/en/model_doc/bert.md` to Chinese. Could I translate it to Chinese? | {
"login": "Nanji-Huaji",
"id": 63086636,
"node_id": "MDQ6VXNlcjYzMDg2NjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/63086636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nanji-Huaji",
"html_url": "https://github.com/Nanji-Huaji",
"followers_url": "https://api.github.com/users/Nanji-Huaji/followers",
"following_url": "https://api.github.com/users/Nanji-Huaji/following{/other_user}",
"gists_url": "https://api.github.com/users/Nanji-Huaji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nanji-Huaji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nanji-Huaji/subscriptions",
"organizations_url": "https://api.github.com/users/Nanji-Huaji/orgs",
"repos_url": "https://api.github.com/users/Nanji-Huaji/repos",
"events_url": "https://api.github.com/users/Nanji-Huaji/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nanji-Huaji/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37705/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37704 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37704/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37704/comments | https://api.github.com/repos/huggingface/transformers/issues/37704/events | https://github.com/huggingface/transformers/pull/37704 | 3,013,721,221 | PR_kwDOCUB6oc6TlCCV | 37,704 | fix: learning_rate logged as tensor causing save issue with deepspeed | {
"login": "NanoCode012",
"id": 9899957,
"node_id": "MDQ6VXNlcjk4OTk5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9899957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NanoCode012",
"html_url": "https://github.com/NanoCode012",
"followers_url": "https://api.github.com/users/NanoCode012/followers",
"following_url": "https://api.github.com/users/NanoCode012/following{/other_user}",
"gists_url": "https://api.github.com/users/NanoCode012/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NanoCode012/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NanoCode012/subscriptions",
"organizations_url": "https://api.github.com/users/NanoCode012/orgs",
"repos_url": "https://api.github.com/users/NanoCode012/repos",
"events_url": "https://api.github.com/users/NanoCode012/events{/privacy}",
"received_events_url": "https://api.github.com/users/NanoCode012/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T11:43:08 | 2025-04-24T11:09:41 | 2025-04-24T10:20:47 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37704",
"html_url": "https://github.com/huggingface/transformers/pull/37704",
"diff_url": "https://github.com/huggingface/transformers/pull/37704.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37704.patch",
"merged_at": "2025-04-24T10:20:47"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
I found that fine-tuning Llama4 in text-only mode with flex attention and deepspeed zero1 causes the `learning_rate` to be logged as a tensor.
The error is: `TypeError: Object of type Tensor is not JSON serializable`. Past issues point to it being a deepspeed/transformers issue. However, I tried swapping deepspeed versions across 0.14.4, 0.15.4, and 0.16.7 and received the same issue. (0.13.1 has some API incompatibility with transformers.)
I found that the `learning_rate` was logged as a tensor.
Before the fix:
```
{'loss': 1.1754, 'learning_rate': tensor(1.0000e-05), 'epoch': 0.0}
{'loss': 1.0044, 'learning_rate': tensor(1.5000e-05), 'epoch': 0.0}
{'loss': 0.9941, 'learning_rate': tensor(2.0000e-05), 'epoch': 0.0}
{'loss': 1.1097, 'learning_rate': tensor(2.5000e-05), 'epoch': 0.0}
```
(Ignore the missing `grad_norm`; I was testing whether it was `grad_norm`'s fault.)
With the fix:
```
{'loss': 1.1754, 'grad_norm': 0.7218227982521057, 'learning_rate': 9.999999747378752e-06, 'epoch': 0.0}
{'loss': 1.0051, 'grad_norm': 0.6964300870895386, 'learning_rate': 1.500000053056283e-05, 'epoch': 0.0}
{'loss': 0.993, 'grad_norm': 0.6806543469429016, 'learning_rate': 1.9999999494757503e-05, 'epoch': 0.0}
{'loss': 1.1091, 'grad_norm': 0.7640533447265625, 'learning_rate': 2.499999936844688e-05, 'epoch': 0.0}
{'loss': 1.2739, 'grad_norm': 0.8041514158248901, 'learning_rate': 3.000000106112566e-05, 'epoch': 0.0}
```
This was run in Axolotl. Config [here](https://gist.github.com/NanoCode012/9dcdb653dea7f9da0d9ea8970763ee3e)
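The failure mode is easy to reproduce in isolation. Below is a minimal sketch of why a tensor-valued `learning_rate` breaks JSON serialization and why unwrapping it to a plain number fixes it; it uses a stand-in class instead of a real `torch.Tensor`, so the class name is illustrative only:

```python
import json

class FakeTensor:
    """Stand-in for torch.Tensor: json.dumps cannot serialize it either."""
    def __init__(self, value):
        self.value = value
    def item(self):
        # mirrors torch.Tensor.item(), which returns a plain Python number
        return self.value

logs = {"loss": 1.1754, "learning_rate": FakeTensor(1e-05)}

try:
    json.dumps(logs)
    serializable = True
except TypeError:  # "Object of type FakeTensor is not JSON serializable"
    serializable = False
assert not serializable

# The fix: unwrap the scalar before it reaches the logged dict
logs["learning_rate"] = logs["learning_rate"].item()
assert json.loads(json.dumps(logs))["learning_rate"] == 1e-05
```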
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37704/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37703 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37703/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37703/comments | https://api.github.com/repos/huggingface/transformers/issues/37703/events | https://github.com/huggingface/transformers/pull/37703 | 3,013,684,083 | PR_kwDOCUB6oc6Tk51T | 37,703 | [CI image] add back sacrebleu | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T11:28:04 | 2025-04-23T13:45:07 | 2025-04-23T13:45:07 | MEMBER | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37703",
"html_url": "https://github.com/huggingface/transformers/pull/37703",
"diff_url": "https://github.com/huggingface/transformers/pull/37703.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37703.patch",
"merged_at": null
} | # What does this PR do?
DO NOT MERGE
CI image builder for #37700 | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37703/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37702 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37702/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37702/comments | https://api.github.com/repos/huggingface/transformers/issues/37702/events | https://github.com/huggingface/transformers/issues/37702 | 3,013,679,195 | I_kwDOCUB6oc6zoRhb | 37,702 | Possibly wrong position_ids shape in GPT2Model doc | {
"login": "kkew3",
"id": 13264071,
"node_id": "MDQ6VXNlcjEzMjY0MDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/13264071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kkew3",
"html_url": "https://github.com/kkew3",
"followers_url": "https://api.github.com/users/kkew3/followers",
"following_url": "https://api.github.com/users/kkew3/following{/other_user}",
"gists_url": "https://api.github.com/users/kkew3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kkew3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkew3/subscriptions",
"organizations_url": "https://api.github.com/users/kkew3/orgs",
"repos_url": "https://api.github.com/users/kkew3/repos",
"events_url": "https://api.github.com/users/kkew3/events{/privacy}",
"received_events_url": "https://api.github.com/users/kkew3/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T11:25:54 | 2025-05-04T00:32:51 | 2025-04-24T14:36:04 | CONTRIBUTOR | null | null | null | null | The docs [GPT2Model > position_ids](https://huggingface.co/docs/transformers/v4.51.3/model_doc/gpt2#transformers.GPT2Model.forward.position_ids) and [GPT2LMHeadModel > position_ids](https://huggingface.co/docs/transformers/v4.51.3/model_doc/gpt2#transformers.GPT2LMHeadModel.forward.position_ids) appear to be wrong.
IMO the `position_ids` shape should be `(batch_size, input_ids_length)` rather than `(batch_size, sequence_length)`, since in the code:
https://github.com/huggingface/transformers/blob/fee1190601b5d04ec6d3f7f58fd22788d7f3236d/src/transformers/models/gpt2/modeling_gpt2.py#L918-L921
clearly `input_ids` and `position_ids` should have the same shape.
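A plain-Python sketch (shapes only, no actual model involved; numbers are illustrative) of why the two lengths differ during cached decoding:

```python
# During incremental decoding with a KV cache, only the newly generated
# token is fed to the model, so input_ids_length (here 1) is smaller than
# the full sequence_length. position_ids must match the former.
past_length = 7                      # tokens already stored in the KV cache
input_ids = [[42]]                   # shape (batch_size=1, input_ids_length=1)

# per the linked code: position_ids covers [past_length, past_length + input_len)
position_ids = [[past_length + i for i in range(len(input_ids[0]))]]

assert position_ids == [[7]]
assert len(position_ids[0]) == len(input_ids[0])                 # matches input_ids_length
assert len(position_ids[0]) != past_length + len(input_ids[0])   # not sequence_length (8)
```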
Attached below is the screenshot of the doc website for `v4.51.3`:
 | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37702/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37701 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37701/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37701/comments | https://api.github.com/repos/huggingface/transformers/issues/37701/events | https://github.com/huggingface/transformers/pull/37701 | 3,013,583,607 | PR_kwDOCUB6oc6Tkj-r | 37,701 | Fix inference bugs in Qwen2.5 Omni | {
"login": "BakerBunker",
"id": 17872844,
"node_id": "MDQ6VXNlcjE3ODcyODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/17872844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BakerBunker",
"html_url": "https://github.com/BakerBunker",
"followers_url": "https://api.github.com/users/BakerBunker/followers",
"following_url": "https://api.github.com/users/BakerBunker/following{/other_user}",
"gists_url": "https://api.github.com/users/BakerBunker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BakerBunker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BakerBunker/subscriptions",
"organizations_url": "https://api.github.com/users/BakerBunker/orgs",
"repos_url": "https://api.github.com/users/BakerBunker/repos",
"events_url": "https://api.github.com/users/BakerBunker/events{/privacy}",
"received_events_url": "https://api.github.com/users/BakerBunker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T10:47:51 | 2025-04-24T08:51:44 | 2025-04-24T08:51:44 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37701",
"html_url": "https://github.com/huggingface/transformers/pull/37701",
"diff_url": "https://github.com/huggingface/transformers/pull/37701.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37701.patch",
"merged_at": "2025-04-24T08:51:44"
} | # What does this PR do?
Fixes some inference issues
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @zucchini-nlp
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37701/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37700 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37700/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37700/comments | https://api.github.com/repos/huggingface/transformers/issues/37700/events | https://github.com/huggingface/transformers/pull/37700 | 3,013,538,082 | PR_kwDOCUB6oc6TkZ9R | 37,700 | [CI] add back `sacrebleu` (and document why) | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T10:30:00 | 2025-04-23T13:45:03 | 2025-04-23T13:45:00 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37700",
"html_url": "https://github.com/huggingface/transformers/pull/37700",
"diff_url": "https://github.com/huggingface/transformers/pull/37700.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37700.patch",
"merged_at": "2025-04-23T13:45:00"
} | # What does this PR do?
Reverts the removed dep in #37676 , and documents why
(TL;DR: `sacrebleu` is not directly used in present `transformers`. However, calling `evaluate.load("sacrebleu")` requires it, and our `Trainer` tests use it under the hood, through the `run_translation.py` example.)
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37700/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37699 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37699/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37699/comments | https://api.github.com/repos/huggingface/transformers/issues/37699/events | https://github.com/huggingface/transformers/issues/37699 | 3,013,440,505 | I_kwDOCUB6oc6znXP5 | 37,699 | tokenizer.convert_tokens_to_ids inconsistent with tokenizer forward in CLIPTokenizer | {
"login": "ysig",
"id": 28439529,
"node_id": "MDQ6VXNlcjI4NDM5NTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/28439529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ysig",
"html_url": "https://github.com/ysig",
"followers_url": "https://api.github.com/users/ysig/followers",
"following_url": "https://api.github.com/users/ysig/following{/other_user}",
"gists_url": "https://api.github.com/users/ysig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ysig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ysig/subscriptions",
"organizations_url": "https://api.github.com/users/ysig/orgs",
"repos_url": "https://api.github.com/users/ysig/repos",
"events_url": "https://api.github.com/users/ysig/events{/privacy}",
"received_events_url": "https://api.github.com/users/ysig/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-23T10:01:08 | 2025-04-23T12:32:32 | 2025-04-23T12:32:17 | NONE | null | null | null | null | `openai/clip-vit-base-patch32` alone was downloaded `14,408,028` times over the last month!
Yet the CLIP tokenizer has an outright bug:
```python
>>> from transformers import CLIPTokenizer
>>> tokenizer = CLIPTokenizer.from_pretrained('openai/clip-vit-base-patch32')
>>> token = 'my'
>>> print(
...     tokenizer.convert_tokens_to_ids([token])[0],
...     tokenizer(
...         [token],
...         padding=True,
...         return_tensors="pt",
...     ).input_ids[0][1].item(),
... )
1152 607
```
Even if there is an internal reason why this happens, it should be fixed, or at least warned about!
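A likely cause (an assumption on my part, not confirmed above): CLIP's BPE vocabulary marks word-final tokens with a `</w>` suffix, so the bare string `'my'` and the word-final token `'my</w>'` map to different ids. A minimal sketch with a hypothetical two-entry vocab (the ids are taken from the output above; the mapping to `"my"` vs `"my</w>"` is assumed):

```python
# Hypothetical mini-vocab illustrating the end-of-word-suffix hypothesis.
vocab = {"my": 1152, "my</w>": 607}

def convert_tokens_to_ids(token):
    # Looks the token string up literally, with no end-of-word marker.
    return vocab[token]

def encode_single_word(word):
    # The tokenizer's forward pass appends `</w>` to word-final tokens.
    return vocab[word + "</w>"]

print(convert_tokens_to_ids("my"), encode_single_word("my"))  # 1152 607
```

If this is the mechanism, `tokenizer.convert_tokens_to_ids(['my</w>'])` should reproduce the id seen in the forward pass.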
### Expected behavior
```python
1152 1152
``` | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37699/timeline | null | not_planned | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37698 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37698/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37698/comments | https://api.github.com/repos/huggingface/transformers/issues/37698/events | https://github.com/huggingface/transformers/pull/37698 | 3,013,394,912 | PR_kwDOCUB6oc6Tj5u2 | 37,698 | [performance_optim] define flash attention mask on NPU device directly | {
"login": "FightingZhen",
"id": 26176607,
"node_id": "MDQ6VXNlcjI2MTc2NjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/26176607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FightingZhen",
"html_url": "https://github.com/FightingZhen",
"followers_url": "https://api.github.com/users/FightingZhen/followers",
"following_url": "https://api.github.com/users/FightingZhen/following{/other_user}",
"gists_url": "https://api.github.com/users/FightingZhen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FightingZhen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FightingZhen/subscriptions",
"organizations_url": "https://api.github.com/users/FightingZhen/orgs",
"repos_url": "https://api.github.com/users/FightingZhen/repos",
"events_url": "https://api.github.com/users/FightingZhen/events{/privacy}",
"received_events_url": "https://api.github.com/users/FightingZhen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T09:47:17 | 2025-08-14T01:52:42 | 2025-04-24T12:06:47 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37698",
"html_url": "https://github.com/huggingface/transformers/pull/37698",
"diff_url": "https://github.com/huggingface/transformers/pull/37698.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37698.patch",
"merged_at": "2025-04-24T12:06:47"
} | # What does this PR do?
When using Flash Attention 2 on Ascend NPU, we have found that CPU memory keeps increasing when calling `npu_flash_attn_varlen_func` or `npu_flash_attn_func`.
The root cause is that the attention mask generated by `torch.ones()` is initially created on the CPU, occupying CPU memory before being transferred to the NPU device. As `npu_flash_attn_varlen_func` or `npu_flash_attn_func` is called repeatedly, CPU memory consumption continues to accumulate, which is not optimal. Below is one example: https://github.com/huggingface/transformers/blob/12f65ee7520404b511c7fe716bc5d48d93d8297d/src/transformers/integrations/npu_flash_attention.py#L225
Therefore, this PR solves the problem by defining the attention mask tensor with `torch.ones()` on the NPU device directly.
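A minimal sketch of the change (function names are illustrative, not the actual integration code; on Ascend hardware the device string would be `"npu"`, demonstrated on CPU here):

```python
import torch

def make_attn_mask_before(seq_len, device):
    # Before: the mask is materialized on CPU first, then copied over,
    # leaving a transient CPU allocation on every call.
    return torch.ones(seq_len, seq_len, dtype=torch.bool).to(device)

def make_attn_mask_after(seq_len, device):
    # After: the mask is allocated directly on the target device.
    return torch.ones(seq_len, seq_len, dtype=torch.bool, device=device)

# Demonstrated on CPU; in the PR, `device` is the NPU.
mask = make_attn_mask_after(4, "cpu")
```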
Fixes # (issue)
Not related.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37698/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37697 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37697/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37697/comments | https://api.github.com/repos/huggingface/transformers/issues/37697/events | https://github.com/huggingface/transformers/pull/37697 | 3,013,253,215 | PR_kwDOCUB6oc6TjZop | 37,697 | Fix torchao doc examples | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T09:10:46 | 2025-04-24T09:10:29 | 2025-04-24T09:10:27 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37697",
"html_url": "https://github.com/huggingface/transformers/pull/37697",
"diff_url": "https://github.com/huggingface/transformers/pull/37697.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37697.patch",
"merged_at": "2025-04-24T09:10:27"
} | # What does this PR do?
Fixes torchao docs examples after https://github.com/huggingface/transformers/pull/37592
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37697/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37696 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37696/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37696/comments | https://api.github.com/repos/huggingface/transformers/issues/37696/events | https://github.com/huggingface/transformers/pull/37696 | 3,013,160,804 | PR_kwDOCUB6oc6TjFUG | 37,696 | [pipeline] allow image-text-to-text generate from text inputs | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T08:38:33 | 2025-04-23T14:52:43 | 2025-04-23T14:50:14 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37696",
"html_url": "https://github.com/huggingface/transformers/pull/37696",
"diff_url": "https://github.com/huggingface/transformers/pull/37696.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37696.patch",
"merged_at": null
} | # What does this PR do?
As per the title: passing an empty list of images triggers an `image_processor` call, which in turn fails. This is because processors usually check for `None`-ness, so an empty list is not treated as "no images".
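A sketch of the failure mode (names are illustrative, not the actual processor code): an `is not None` check lets an empty list fall into the image path.

```python
def preprocess(text, images=None):
    # Buggy pattern: `[]` is not None, so it still takes the image path.
    if images is not None:
        return "image-text path"
    return "text-only path"

def preprocess_fixed(text, images=None):
    # Treating an empty list like "no images" restores text-only generation.
    if images:
        return "image-text path"
    return "text-only path"

print(preprocess("hi", []))        # image-text path (the bug)
print(preprocess_fixed("hi", []))  # text-only path
```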
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37696/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37695 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37695/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37695/comments | https://api.github.com/repos/huggingface/transformers/issues/37695/events | https://github.com/huggingface/transformers/pull/37695 | 3,013,101,767 | PR_kwDOCUB6oc6Ti4jk | 37,695 | Pin torch == 2.6 on PR CI docker images for now | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T08:15:15 | 2025-04-29T12:16:24 | 2025-04-23T09:47:23 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37695",
"html_url": "https://github.com/huggingface/transformers/pull/37695",
"diff_url": "https://github.com/huggingface/transformers/pull/37695.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37695.patch",
"merged_at": "2025-04-23T09:47:23"
} | # What does this PR do?
torch 2.7 is out today, but it causes strange issues; see the error log below.
Let's pin 2.6 on CircleCI for now.
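The bottom of the traceback below points at a Python-version mismatch rather than torch itself: `importlib.metadata.entry_points()` only accepts the `group=` keyword from Python 3.10 onwards, and the CI image runs 3.9. A hedged sketch of a version-aware call (illustrative only; the fix in this PR is simply pinning torch):

```python
import sys
import importlib.metadata as md

def backend_entry_points(group):
    if sys.version_info >= (3, 10):
        # Selectable API: entry_points(group=...) exists from Python 3.10.
        return list(md.entry_points(group=group))
    # Python 3.9: entry_points() returns a dict of group -> entries.
    return list(md.entry_points().get(group, []))

eps = backend_entry_points("networkx.backends")
```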
### error log
```
File "/usr/local/lib/python3.9/site-packages/transformers/modeling_utils.py", line 62, in <module>
from .integrations.flex_attention import flex_attention_forward
File "/usr/local/lib/python3.9/site-packages/transformers/integrations/flex_attention.py", line 39, in <module>
from torch.nn.attention.flex_attention import BlockMask, flex_attention
File "/usr/local/lib/python3.9/site-packages/torch/nn/attention/flex_attention.py", line 15, in <module>
from torch._dynamo._trace_wrapped_higher_order_op import TransformGetItemToIndex
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/__init__.py", line 53, in <module>
from .polyfills import loader as _ # usort: skip # noqa: F401
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/polyfills/loader.py", line 25, in <module>
POLYFILLED_MODULES: tuple["ModuleType", ...] = tuple(
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/polyfills/loader.py", line 26, in <genexpr>
importlib.import_module(f".{submodule}", package=polyfills.__name__)
File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/polyfills/builtins.py", line 31, in <module>
def all(iterable: Iterable[object], /) -> bool:
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/decorators.py", line 427, in wrapper
rule_map: dict[Any, type[VariableTracker]] = get_torch_obj_rule_map()
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/trace_rules.py", line 2870, in get_torch_obj_rule_map
obj = load_object(k)
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/trace_rules.py", line 2901, in load_object
val = _load_obj_from_str(x[0])
File "/usr/local/lib/python3.9/site-packages/torch/_dynamo/trace_rules.py", line 2885, in _load_obj_from_str
return getattr(importlib.import_module(module), obj_name)
File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/usr/local/lib/python3.9/site-packages/torch/_higher_order_ops/map.py", line 6, in <module>
from torch._functorch.aot_autograd import AOTConfig, create_joint
File "/usr/local/lib/python3.9/site-packages/torch/_functorch/aot_autograd.py", line 135, in <module>
from .partitioners import default_partition
File "/usr/local/lib/python3.9/site-packages/torch/_functorch/partitioners.py", line 37, in <module>
from ._activation_checkpointing.graph_info_provider import GraphInfoProvider
File "/usr/local/lib/python3.9/site-packages/torch/_functorch/_activation_checkpointing/graph_info_provider.py", line 3, in <module>
import networkx as nx
File "/usr/local/lib/python3.9/site-packages/networkx/__init__.py", line 19, in <module>
from networkx import utils
File "/usr/local/lib/python3.9/site-packages/networkx/utils/__init__.py", line 7, in <module>
from networkx.utils.backends import *
File "/usr/local/lib/python3.9/site-packages/networkx/utils/backends.py", line 258, in <module>
backends = _get_backends("networkx.backends")
File "/usr/local/lib/python3.9/site-packages/networkx/utils/backends.py", line 234, in _get_backends
items = entry_points(group=group)
TypeError: entry_points() got an unexpected keyword argument 'group'
``` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37695/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37694 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37694/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37694/comments | https://api.github.com/repos/huggingface/transformers/issues/37694/events | https://github.com/huggingface/transformers/pull/37694 | 3,013,051,960 | PR_kwDOCUB6oc6Titwz | 37,694 | Fix typos in comments | {
"login": "co63oc",
"id": 4617245,
"node_id": "MDQ6VXNlcjQ2MTcyNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4617245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/co63oc",
"html_url": "https://github.com/co63oc",
"followers_url": "https://api.github.com/users/co63oc/followers",
"following_url": "https://api.github.com/users/co63oc/following{/other_user}",
"gists_url": "https://api.github.com/users/co63oc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/co63oc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/co63oc/subscriptions",
"organizations_url": "https://api.github.com/users/co63oc/orgs",
"repos_url": "https://api.github.com/users/co63oc/repos",
"events_url": "https://api.github.com/users/co63oc/events{/privacy}",
"received_events_url": "https://api.github.com/users/co63oc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T07:54:58 | 2025-04-30T00:53:19 | 2025-04-24T14:59:56 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37694",
"html_url": "https://github.com/huggingface/transformers/pull/37694",
"diff_url": "https://github.com/huggingface/transformers/pull/37694.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37694.patch",
"merged_at": "2025-04-24T14:59:56"
} | # What does this PR do?
Fix typos in comments found by codespell.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37694/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37693 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37693/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37693/comments | https://api.github.com/repos/huggingface/transformers/issues/37693/events | https://github.com/huggingface/transformers/pull/37693 | 3,012,836,988 | PR_kwDOCUB6oc6Th_Oe | 37,693 | Make sure torch_is_available before using torch.distributed | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T06:26:16 | 2025-04-24T09:31:38 | 2025-04-24T09:31:35 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37693",
"html_url": "https://github.com/huggingface/transformers/pull/37693",
"diff_url": "https://github.com/huggingface/transformers/pull/37693.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37693.patch",
"merged_at": "2025-04-24T09:31:35"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/37680
| {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37693/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37693/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37692 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37692/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37692/comments | https://api.github.com/repos/huggingface/transformers/issues/37692/events | https://github.com/huggingface/transformers/issues/37692 | 3,012,819,216 | I_kwDOCUB6oc6zk_kQ | 37,692 | qwen2_5_omni initialize bug. | {
"login": "kobenaxie",
"id": 22359441,
"node_id": "MDQ6VXNlcjIyMzU5NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/22359441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kobenaxie",
"html_url": "https://github.com/kobenaxie",
"followers_url": "https://api.github.com/users/kobenaxie/followers",
"following_url": "https://api.github.com/users/kobenaxie/following{/other_user}",
"gists_url": "https://api.github.com/users/kobenaxie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kobenaxie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kobenaxie/subscriptions",
"organizations_url": "https://api.github.com/users/kobenaxie/orgs",
"repos_url": "https://api.github.com/users/kobenaxie/repos",
"events_url": "https://api.github.com/users/kobenaxie/events{/privacy}",
"received_events_url": "https://api.github.com/users/kobenaxie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T06:17:27 | 2025-04-23T07:31:30 | 2025-04-23T07:31:30 | NONE | null | null | null | null | As mentioned in https://github.com/huggingface/transformers/pull/37690,
`model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-7B")` raises the error `AttributeError: type object 'Qwen2_5OmniConfig' has no attribute 'thinker_config'`.
https://github.com/huggingface/transformers/blob/1d9743edc2030e8444d5bdffa910a3d6822bcf2d/src/transformers/models/qwen2_5_omni/configuration_qwen2_5_omni.py#L1061
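The failure mode can be reproduced in isolation with a minimal stand-in class (illustrative only — `Config` below is not the real `Qwen2_5OmniConfig`):

```python
class Config:
    """Minimal stand-in for the config class (not the real transformers code)."""

    def __init__(self):
        self.thinker_config = {"hidden_size": 128}  # instance attribute

    @classmethod
    def get_text_config_broken(cls):
        # Fails: the *class* object has no 'thinker_config' attribute.
        return cls.thinker_config

    def get_text_config(self):
        # Works: plain instance-attribute access, as in the fix.
        return self.thinker_config


def broken_call_raises():
    try:
        Config().get_text_config_broken()
    except AttributeError:
        return True
    return False
```

Removing the `classmethod` decorator, as done in the linked PR, turns the lookup into a normal instance-attribute access.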
Installed via `pip install git+https://github.com/huggingface/transformers@1d9743edc2030e8444d5bdffa910a3d6822bcf2d`. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37692/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/37692/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37691 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37691/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37691/comments | https://api.github.com/repos/huggingface/transformers/issues/37691/events | https://github.com/huggingface/transformers/pull/37691 | 3,012,685,554 | PR_kwDOCUB6oc6Thex- | 37,691 | fix mpt test of different outputs from cuda | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-23T04:53:54 | 2025-07-02T05:22:25 | 2025-04-25T16:04:57 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37691",
"html_url": "https://github.com/huggingface/transformers/pull/37691",
"diff_url": "https://github.com/huggingface/transformers/pull/37691.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37691.patch",
"merged_at": "2025-04-25T16:04:56"
} | The test `RUN_SLOW=1 pytest -rA tests/models/mpt/test_modeling_mpt.py::MptIntegrationTests::test_generation_batched` fails with a result mismatch on XPU.
The outputs of the MPT model on XPU are slightly different from CUDA, so we need a separate expected-result check for XPU. Besides, we need to use `require_deterministic_for_xpu` so XPU can produce stable outputs.
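One way to sketch such a device-dependent expectation check (all names and expected strings below are hypothetical, not the actual test values):

```python
# Hypothetical device-keyed expectations; fall back to the CUDA reference
# when no device-specific value is recorded.
EXPECTED = {
    "cuda": "cuda reference output",
    "xpu": "xpu-specific output",
}


def expected_for(device_type):
    # e.g. device_type could come from torch.device(model.device).type
    return EXPECTED.get(device_type, EXPECTED["cuda"])
```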
Hi @SunMarc, please review this PR. Thanks | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37691/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37690 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37690/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37690/comments | https://api.github.com/repos/huggingface/transformers/issues/37690/events | https://github.com/huggingface/transformers/pull/37690 | 3,012,238,895 | PR_kwDOCUB6oc6TgCZd | 37,690 | fix: remove classmethod from `Qwen2_5OmniConfig.get_text_config` | {
"login": "shahruk10",
"id": 26173976,
"node_id": "MDQ6VXNlcjI2MTczOTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26173976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shahruk10",
"html_url": "https://github.com/shahruk10",
"followers_url": "https://api.github.com/users/shahruk10/followers",
"following_url": "https://api.github.com/users/shahruk10/following{/other_user}",
"gists_url": "https://api.github.com/users/shahruk10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shahruk10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shahruk10/subscriptions",
"organizations_url": "https://api.github.com/users/shahruk10/orgs",
"repos_url": "https://api.github.com/users/shahruk10/repos",
"events_url": "https://api.github.com/users/shahruk10/events{/privacy}",
"received_events_url": "https://api.github.com/users/shahruk10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T22:09:33 | 2025-04-23T14:52:18 | 2025-04-23T07:30:57 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37690",
"html_url": "https://github.com/huggingface/transformers/pull/37690",
"diff_url": "https://github.com/huggingface/transformers/pull/37690.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37690.patch",
"merged_at": "2025-04-23T07:30:57"
} | # What does this PR do?
- Since `get_text_config` references an instance variable within the class (`self.thinker_config`), the `get_text_config` method should not be a classmethod.
- Before this fix, users were getting the following error:
  `AttributeError: type object 'Qwen2_5OmniConfig' has no attribute 'thinker_config'`
- This commit simply removes the `classmethod` wrapper from the `get_text_config` method | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37690/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37690/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37689 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37689/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37689/comments | https://api.github.com/repos/huggingface/transformers/issues/37689/events | https://github.com/huggingface/transformers/pull/37689 | 3,011,895,224 | PR_kwDOCUB6oc6Te3Ie | 37,689 | [FSDP2] enable save_pretrained() | {
"login": "S1ro1",
"id": 54212263,
"node_id": "MDQ6VXNlcjU0MjEyMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54212263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/S1ro1",
"html_url": "https://github.com/S1ro1",
"followers_url": "https://api.github.com/users/S1ro1/followers",
"following_url": "https://api.github.com/users/S1ro1/following{/other_user}",
"gists_url": "https://api.github.com/users/S1ro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/S1ro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/S1ro1/subscriptions",
"organizations_url": "https://api.github.com/users/S1ro1/orgs",
"repos_url": "https://api.github.com/users/S1ro1/repos",
"events_url": "https://api.github.com/users/S1ro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/S1ro1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T19:06:18 | 2025-06-02T14:17:51 | 2025-06-02T14:17:50 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37689",
"html_url": "https://github.com/huggingface/transformers/pull/37689",
"diff_url": "https://github.com/huggingface/transformers/pull/37689.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37689.patch",
"merged_at": null
} | A small PR enabling `save_pretrained()` for FSDP2. It only changes the method of getting the state dict: `model.state_dict()` returns a sharded one for FSDP2 ([change documented here](https://github.com/pytorch/torchtitan/blob/main/docs/fsdp.md)), so we need to use the `get_model_state_dict()` API.
In conjunction with [#3527](https://github.com/huggingface/accelerate/pull/3527) in accelerate, FSDP2 should fully support both `save_pretrained`/`save_state`.
@SunMarc
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37689/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37688 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37688/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37688/comments | https://api.github.com/repos/huggingface/transformers/issues/37688/events | https://github.com/huggingface/transformers/pull/37688 | 3,011,776,802 | PR_kwDOCUB6oc6TedbL | 37,688 | Fixes Llama4 cpu_offload compatibility | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T18:09:12 | 2025-09-25T18:52:49 | 2025-09-25T18:52:49 | NONE | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37688",
"html_url": "https://github.com/huggingface/transformers/pull/37688",
"diff_url": "https://github.com/huggingface/transformers/pull/37688.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37688.patch",
"merged_at": null
} | # What does this PR do?
For some reason, llama4 moves `input_ids` to `embed_tokens.weight.device` before passing the data. This is not done in llama:
https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L534
Additionally, llama4 calls `cache_position.to(self.device)`, which is also not done in llama, and is also incompatible with cpu_offload.
The code as-is is not compatible with `accelerate.cpu_offload`, since `weight.device`/`self.device` will be `meta`.
This PR:
1. Aligns calls to `embed_tokens` between llama4 & llama
2. Aligns usage of `cache_position` in causal mask updates
3. Enables compatibility with `accelerate.cpu_offload`
4. Simplifies the code
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37688/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37687 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37687/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37687/comments | https://api.github.com/repos/huggingface/transformers/issues/37687/events | https://github.com/huggingface/transformers/pull/37687 | 3,011,666,308 | PR_kwDOCUB6oc6TeFlI | 37,687 | Handle audio/ video default arguments in processor's apply_chat_template | {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-22T17:16:24 | 2025-04-25T17:09:41 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37687",
"html_url": "https://github.com/huggingface/transformers/pull/37687",
"diff_url": "https://github.com/huggingface/transformers/pull/37687.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37687.patch",
"merged_at": null
} | # What does this PR do?
For multimodal models, it is often required to pass kwargs to the Processor indicating how the inputs were sampled: `fps` for video inputs, `sampling_rate` for audio inputs.
Such values can be set as defaults in `ModelProcessorKwargs`; nevertheless, they are not passed to the processor's `__call__` method when using `apply_chat_template`, resulting in silent errors.
For example in `Qwen2.5-VL`:
```python
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
conversation = [
{
"role": "user",
"content": [
{"type": "video", "path": "example.mp4"},
{"type": "text", "text": "What happened in the video?"},
],
}
]
inputs = processor.apply_chat_template(
conversation,
video_fps=1,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
)
```
→ images will be sampled at 1 fps
Yet such value (`fps=1`) is not passed in `kwargs` in:
```python
out = self(
text=prompt,
images=batch_images if batch_images else None,
videos=batch_videos if batch_videos else None,
audio=batch_audios if batch_audios else None,
**kwargs,
)
```
meaning that the default value `fps=2` is used in the processor's `__call__` method instead.
This is also the case for audio processors that need to know at which `sampling_rate` the audio has been sampled.
# Fix attempt
I provide an attempt at a fix by initialising such values to the default specified in `ModelProcessorKwargs` (if provided) and passing them on through the Processor's `__call__` kwargs.
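Roughly, the intended merging behaviour could look like this (a sketch with assumed default values — the real defaults live in each model's `ModelProcessorKwargs`):

```python
# Assumed defaults for illustration only; not the actual transformers values.
DEFAULTS = {"fps": 2, "sampling_rate": 16000}


def resolve_call_kwargs(user_kwargs, defaults=DEFAULTS):
    resolved = dict(defaults)     # start from the declared defaults
    resolved.update(user_kwargs)  # explicit user values win
    return resolved               # forwarded to the processor's __call__
```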
cc @zucchini-nlp and @molbap since we talked offline about this
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37687/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37687/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37686 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37686/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37686/comments | https://api.github.com/repos/huggingface/transformers/issues/37686/events | https://github.com/huggingface/transformers/issues/37686 | 3,011,664,924 | I_kwDOCUB6oc6zglwc | 37,686 | Tokenizing with `apply_chat_template` behaves differently from regular tokenizing | {
"login": "sayanshaw24",
"id": 52221015,
"node_id": "MDQ6VXNlcjUyMjIxMDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/52221015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sayanshaw24",
"html_url": "https://github.com/sayanshaw24",
"followers_url": "https://api.github.com/users/sayanshaw24/followers",
"following_url": "https://api.github.com/users/sayanshaw24/following{/other_user}",
"gists_url": "https://api.github.com/users/sayanshaw24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sayanshaw24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayanshaw24/subscriptions",
"organizations_url": "https://api.github.com/users/sayanshaw24/orgs",
"repos_url": "https://api.github.com/users/sayanshaw24/repos",
"events_url": "https://api.github.com/users/sayanshaw24/events{/privacy}",
"received_events_url": "https://api.github.com/users/sayanshaw24/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-22T17:15:41 | 2025-05-31T08:02:42 | 2025-05-31T08:02:42 | NONE | null | null | null | null | ### System Info
Using the latest `transformers v4.51.3` and Python 3.11.9 on Linux (though the problem is platform-generic), tokenization with `apply_chat_template` when setting `tokenize = True` behaves differently from first calling `apply_chat_template` and then calling `encode` on the result.
For instance, with the `google/gemma-3-1b-it` tokenizer in this example:
```
Python 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoTokenizer
>>> model_id = "google/gemma-3-1b-it"
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> messages = [
... [
... {
... "role": "user",
... "content": [{"type": "text", "text": "What is 2 + 3?"},]
... },
... ],
... ]
>>> tokenizer.apply_chat_template(messages, tokenize=False)
['<bos><start_of_turn>user\nWhat is 2 + 3?<end_of_turn>\n']
```
The results of tokenization with `apply_chat_template` when setting `tokenize = True` is as follows:
```
>>> tokenizer.apply_chat_template(messages, tokenize=True)
[[2, 105, 2364, 107, 3689, 563, 236743, 236778, 900, 236743, 236800, 236881, 106, 107]]
```
whereas doing this separately using `encode` results in:
```
>>> tokenizer.encode(tokenizer.apply_chat_template(messages, tokenize=False)[0])
[2, 2, 105, 2364, 107, 3689, 563, 236743, 236778, 900, 236743, 236800, 236881, 106, 107]
```
Note that first calling `apply_chat_template` and then `encode` on the result produces two BOS tokens (input id `2`), which is redundant (this is the only difference, even on other examples). Also, the result of this should logically be the same as tokenization with `apply_chat_template` when setting `tokenize = True`.
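The duplication can be illustrated without any model download, using toy stand-ins (the `apply_chat_template`/`encode` functions below are simplified fakes, not the real tokenizer methods; whether `add_special_tokens=False` is the intended workaround is for the maintainers to confirm):

```python
# Toy stand-ins showing why encoding an already-templated string
# duplicates the BOS token.
BOS = "<bos>"


def apply_chat_template(text):
    # The chat template already prepends BOS.
    return BOS + " " + text


def encode(text, add_special_tokens=True):
    tokens = text.split()
    if add_special_tokens:
        tokens = [BOS] + tokens  # encode() prepends BOS again by default
    return tokens


print(encode(apply_chat_template("hello")))
# ['<bos>', '<bos>', 'hello']  <- duplicated BOS
print(encode(apply_chat_template("hello"), add_special_tokens=False))
# ['<bos>', 'hello']           <- single BOS
```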
### Who can help?
@ArthurZucker and @itazap
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
1. Ensure environment parity (latest `transformers v4.51.3` and Python 3.11.9 on Linux)
2. Load HF tokenizer for `google/gemma-3-1b-it`
3. Run script with above example (i.e., call `apply_chat_template` and set `tokenize = True` and compare with the results of first calling `apply_chat_template` with `tokenize = False` and then `encode` on the result of that.)
### Expected behavior
Tokenization with `apply_chat_template` when setting `tokenize = True` should produce the same result as first calling `apply_chat_template` and then `encode` on the result of that.
Also, logically there should not be 2 BOS tokens in the result of the latter since it is redundant. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37686/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37685 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37685/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37685/comments | https://api.github.com/repos/huggingface/transformers/issues/37685/events | https://github.com/huggingface/transformers/pull/37685 | 3,011,626,848 | PR_kwDOCUB6oc6Td9B7 | 37,685 | [cleanup] remove `/model_cards` 🧹 🧹 | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T16:57:03 | 2025-04-23T11:45:31 | 2025-04-23T11:45:27 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37685",
"html_url": "https://github.com/huggingface/transformers/pull/37685",
"diff_url": "https://github.com/huggingface/transformers/pull/37685.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37685.patch",
"merged_at": "2025-04-23T11:45:27"
} | # What does this PR do?
As also discussed with @LysandreJik -- `/model_cards` holds a README regarding the old model cards, which used to live in `transformers`. We have long moved away from them :) | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37685/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37684 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37684/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37684/comments | https://api.github.com/repos/huggingface/transformers/issues/37684/events | https://github.com/huggingface/transformers/pull/37684 | 3,011,613,641 | PR_kwDOCUB6oc6Td6N9 | 37,684 | [tests] reorganize cache tests and clean memory between tests | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T16:50:31 | 2025-04-29T11:21:17 | 2025-04-29T11:21:14 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37684",
"html_url": "https://github.com/huggingface/transformers/pull/37684",
"diff_url": "https://github.com/huggingface/transformers/pull/37684.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37684.patch",
"merged_at": "2025-04-29T11:21:14"
} | # What does this PR do?
#37394 introduces a cache change, but there are several cache test issues at the moment, preventing a clean merge.
This PR:
1. Removes the `@slow` decorator from a few tests (change pulled from #37394)
2. Deletes `test_static_cache_greedy_decoding_pad_right`, as it doesn't make sense to generate with right padding (change pulled from #37394)
3. Moves `torch.export()` cache tests to their own class
4. Cleans device memory between test runs. On some devices, we were observing CPU weight offloading after a few tests, massively slowing down runtimes. Cleaning the device memory between tests fixes it.
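The memory-cleaning step in item 4 can be sketched as a small helper run between tests. This is a sketch under assumptions, not the PR's actual code; the helper name is hypothetical, and the `torch` import is guarded so the helper is a no-op in environments without it:

```python
import gc


def cleanup_device_memory():
    # Release Python-level references first, then ask the backend to
    # return cached blocks to the device allocator.
    gc.collect()
    try:
        import torch

        if torch.cuda.is_available():
            # Frees cached (unused) blocks so later tests see a clean device,
            # avoiding the CPU-offload slowdown described above.
            torch.cuda.empty_cache()
    except ImportError:
        # No torch in the environment: nothing device-side to clean.
        pass
```

In a test suite, such a helper would typically be called from `tearDown` or a pytest fixture so every test starts from a clean device.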
_____________________________________________________
Follow-up PR, before retesting #37394:
1. convert as many `@slow` tests into fast tests as possible
2. fix failing tests | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37684/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37683 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37683/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37683/comments | https://api.github.com/repos/huggingface/transformers/issues/37683/events | https://github.com/huggingface/transformers/issues/37683 | 3,011,581,459 | I_kwDOCUB6oc6zgRYT | 37,683 | Behaviour of `batch_eval_metrics` determines the `include_for_metrics` behaviour | {
"login": "prabhuteja12",
"id": 11191577,
"node_id": "MDQ6VXNlcjExMTkxNTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/11191577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prabhuteja12",
"html_url": "https://github.com/prabhuteja12",
"followers_url": "https://api.github.com/users/prabhuteja12/followers",
"following_url": "https://api.github.com/users/prabhuteja12/following{/other_user}",
"gists_url": "https://api.github.com/users/prabhuteja12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prabhuteja12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prabhuteja12/subscriptions",
"organizations_url": "https://api.github.com/users/prabhuteja12/orgs",
"repos_url": "https://api.github.com/users/prabhuteja12/repos",
"events_url": "https://api.github.com/users/prabhuteja12/events{/privacy}",
"received_events_url": "https://api.github.com/users/prabhuteja12/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-22T16:35:31 | 2025-08-21T16:56:24 | 2025-06-30T08:03:23 | NONE | null | null | null | null | ### System Info
Hello!
In the [`evaluation_loop`](https://github.com/huggingface/transformers/blob/v4.51.1/src/transformers/trainer.py#L4442) method, there is an interplay between the `batch_eval_metrics` and `include_for_metrics` arguments. When `include_for_metrics` is set to `inputs`, what is sent to `compute_metrics` is the `all_inputs` object (which only contains the `input_ids`), whereas when `batch_eval_metrics` is set to `True`, `compute_metrics` is called with `inputs`, which includes the `input_ids` and also any other inputs that are passed to the model.
This behaviour is inconsistent. Can you please look into this?
@zach-huggingface @SunMarc
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Any trainer example with both
```python
include_for_metrics = ["inputs"]
batch_eval_metrics = True
```
should cause this.
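Because of this, a `compute_metrics` written for this setup currently has to normalize the two shapes itself. A minimal defensive sketch (the helper name is ours, not part of `transformers`):

```python
def extract_input_ids(inputs):
    # With batch_eval_metrics=True, compute_metrics receives the full dict
    # of model inputs; on the include_for_metrics=["inputs"] path it only
    # receives the input_ids themselves. Accept both shapes.
    if isinstance(inputs, dict):
        return inputs["input_ids"]
    return inputs
```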
### Expected behavior
Ideally, `batch_eval_metrics` shouldn't dictate how the `compute_metrics` is called. Can this inconsistency be fixed? | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37683/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37682 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37682/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37682/comments | https://api.github.com/repos/huggingface/transformers/issues/37682/events | https://github.com/huggingface/transformers/pull/37682 | 3,011,561,849 | PR_kwDOCUB6oc6TdvTf | 37,682 | FIX: Register image processing kwargs in DonutProcessor | {
"login": "mrseongminkim",
"id": 78520681,
"node_id": "MDQ6VXNlcjc4NTIwNjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/78520681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrseongminkim",
"html_url": "https://github.com/mrseongminkim",
"followers_url": "https://api.github.com/users/mrseongminkim/followers",
"following_url": "https://api.github.com/users/mrseongminkim/following{/other_user}",
"gists_url": "https://api.github.com/users/mrseongminkim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrseongminkim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrseongminkim/subscriptions",
"organizations_url": "https://api.github.com/users/mrseongminkim/orgs",
"repos_url": "https://api.github.com/users/mrseongminkim/repos",
"events_url": "https://api.github.com/users/mrseongminkim/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrseongminkim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T16:26:08 | 2025-09-21T14:19:00 | 2025-09-21T14:19:00 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37682",
"html_url": "https://github.com/huggingface/transformers/pull/37682",
"diff_url": "https://github.com/huggingface/transformers/pull/37682.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37682.patch",
"merged_at": null
} | # What does this PR do?
This PR addresses an issue where `DonutProcessor` ignores certain valid image preprocessing keyword arguments, such as `random_padding`, `do_thumbnail`, and `do_align_long_axis`, when they are passed directly during the processor call.
**Problem:**
When using the processor as shown below, the specified keyword arguments are ignored, and warning messages like `Keyword argument '...' is not a valid argument for this processor and will be ignored.` are displayed:
```python
from PIL import Image
from transformers import DonutProcessor
image = Image.open('hf.png')
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
processed_output = processor(image, random_padding=True, do_thumbnail=True, do_align_long_axis=True)
```
**Solution:**
This PR fixes the issue by explicitly adding these keyword arguments (`random_padding`, `do_thumbnail`, `do_align_long_axis`) to the set of arguments that `DonutProcessor` recognizes and can handle. This ensures that the processor accepts these arguments without warnings and correctly passes them down to the image processing component.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "mrseongminkim",
"id": 78520681,
"node_id": "MDQ6VXNlcjc4NTIwNjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/78520681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrseongminkim",
"html_url": "https://github.com/mrseongminkim",
"followers_url": "https://api.github.com/users/mrseongminkim/followers",
"following_url": "https://api.github.com/users/mrseongminkim/following{/other_user}",
"gists_url": "https://api.github.com/users/mrseongminkim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrseongminkim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrseongminkim/subscriptions",
"organizations_url": "https://api.github.com/users/mrseongminkim/orgs",
"repos_url": "https://api.github.com/users/mrseongminkim/repos",
"events_url": "https://api.github.com/users/mrseongminkim/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrseongminkim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37682/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37681 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37681/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37681/comments | https://api.github.com/repos/huggingface/transformers/issues/37681/events | https://github.com/huggingface/transformers/pull/37681 | 3,011,500,783 | PR_kwDOCUB6oc6TdiYO | 37,681 | refactor create_token_type_ids_from_sequences | {
"login": "itazap",
"id": 31893021,
"node_id": "MDQ6VXNlcjMxODkzMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/31893021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itazap",
"html_url": "https://github.com/itazap",
"followers_url": "https://api.github.com/users/itazap/followers",
"following_url": "https://api.github.com/users/itazap/following{/other_user}",
"gists_url": "https://api.github.com/users/itazap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itazap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itazap/subscriptions",
"organizations_url": "https://api.github.com/users/itazap/orgs",
"repos_url": "https://api.github.com/users/itazap/repos",
"events_url": "https://api.github.com/users/itazap/events{/privacy}",
"received_events_url": "https://api.github.com/users/itazap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T15:58:13 | 2025-06-12T21:24:45 | 2025-06-12T21:24:43 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37681",
"html_url": "https://github.com/huggingface/transformers/pull/37681",
"diff_url": "https://github.com/huggingface/transformers/pull/37681.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37681.patch",
"merged_at": "2025-06-12T21:24:43"
} | There is a lot of duplicate code for `create_token_type_ids_from_sequences`; we can refactor by updating the base function!
However I think it should also be internalized (see https://github.com/huggingface/transformers/pull/37522) as it's only used internally by `prepare_for_model` for **slow tokenizers only**.
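For context, the duplicated per-model implementations mostly follow the BERT-style segment-id pattern below. A base version could look roughly like this (a sketch with placeholder special-token ids passed as parameters, not the actual refactor):

```python
def create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None,
                                         cls_id=101, sep_id=102):
    # BERT-style segments: [CLS] A [SEP] -> segment 0 ; B [SEP] -> segment 1.
    first = [cls_id] + token_ids_0 + [sep_id]
    if token_ids_1 is None:
        return [0] * len(first)
    second = token_ids_1 + [sep_id]
    return [0] * len(first) + [1] * len(second)
```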
Questions:
- are users supposed to be using these functions directly in any case? | {
"login": "itazap",
"id": 31893021,
"node_id": "MDQ6VXNlcjMxODkzMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/31893021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itazap",
"html_url": "https://github.com/itazap",
"followers_url": "https://api.github.com/users/itazap/followers",
"following_url": "https://api.github.com/users/itazap/following{/other_user}",
"gists_url": "https://api.github.com/users/itazap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itazap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itazap/subscriptions",
"organizations_url": "https://api.github.com/users/itazap/orgs",
"repos_url": "https://api.github.com/users/itazap/repos",
"events_url": "https://api.github.com/users/itazap/events{/privacy}",
"received_events_url": "https://api.github.com/users/itazap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37681/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37680 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37680/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37680/comments | https://api.github.com/repos/huggingface/transformers/issues/37680/events | https://github.com/huggingface/transformers/issues/37680 | 3,011,476,209 | I_kwDOCUB6oc6zf3rx | 37,680 | Transformer pipelines erroneously invokes torch | {
"login": "rishic3",
"id": 77904151,
"node_id": "MDQ6VXNlcjc3OTA0MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/77904151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rishic3",
"html_url": "https://github.com/rishic3",
"followers_url": "https://api.github.com/users/rishic3/followers",
"following_url": "https://api.github.com/users/rishic3/following{/other_user}",
"gists_url": "https://api.github.com/users/rishic3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rishic3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rishic3/subscriptions",
"organizations_url": "https://api.github.com/users/rishic3/orgs",
"repos_url": "https://api.github.com/users/rishic3/repos",
"events_url": "https://api.github.com/users/rishic3/events{/privacy}",
"received_events_url": "https://api.github.com/users/rishic3/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-22T15:47:49 | 2025-04-24T09:31:36 | 2025-04-24T09:31:36 | NONE | null | null | null | null | ### System Info
As originally pointed out by @radoslav-dimitrov-indeavr (see [this comment](https://github.com/huggingface/transformers/pull/37307#issuecomment-2786494331)), PR #37307 adds a torch invocation to pipelines, which will cause pure tensorflow applications to break.
transformers/src/transformers/pipelines/base.py:
```python
if torch.distributed.is_initialized():
```
[Source](https://github.com/huggingface/transformers/blame/530322ccb6d109ba1383e852b17729a2185d816b/src/transformers/pipelines/base.py#L984)
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Repro: create an environment with transformers and tensorflow but without pytorch.
```python
classifier = pipeline("sentiment-analysis")
```
Output:
```text
NameError: name 'torch' is not defined
```
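One way to avoid the hard dependency is to guard the call behind an import check. This is a sketch of the guard pattern, not the actual fix that landed; the function name is hypothetical:

```python
import importlib.util


def torch_distributed_is_initialized():
    # Only touch torch if it is importable at all; pure-TensorFlow
    # environments then skip the distributed check entirely.
    if importlib.util.find_spec("torch") is None:
        return False
    import torch

    return torch.distributed.is_available() and torch.distributed.is_initialized()
```

`transformers` also ships its own `is_torch_available()` utility, which the library itself typically uses for this kind of guard.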
### Expected behavior
Given that pipelines are supported for pure TensorFlow applications, said applications should not require a PyTorch installation. | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37680/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37679 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37679/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37679/comments | https://api.github.com/repos/huggingface/transformers/issues/37679/events | https://github.com/huggingface/transformers/issues/37679 | 3,011,461,004 | I_kwDOCUB6oc6zfz-M | 37,679 | This PR causes Transformers to error out when a model is using Tensorflow and the environment does not provide `torch` in any way | {
"login": "rishic3",
"id": 77904151,
"node_id": "MDQ6VXNlcjc3OTA0MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/77904151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rishic3",
"html_url": "https://github.com/rishic3",
"followers_url": "https://api.github.com/users/rishic3/followers",
"following_url": "https://api.github.com/users/rishic3/following{/other_user}",
"gists_url": "https://api.github.com/users/rishic3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rishic3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rishic3/subscriptions",
"organizations_url": "https://api.github.com/users/rishic3/orgs",
"repos_url": "https://api.github.com/users/rishic3/repos",
"events_url": "https://api.github.com/users/rishic3/events{/privacy}",
"received_events_url": "https://api.github.com/users/rishic3/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T15:41:34 | 2025-04-22T15:48:22 | 2025-04-22T15:48:22 | NONE | null | null | null | null | This PR causes Transformers to error out when a model is using Tensorflow and the environment does not provide `torch` in any way
transformers/src/transformers/pipelines/base.py:
```python
if torch.distributed.is_initialized():
```
[Source](https://github.com/huggingface/transformers/blame/530322ccb6d109ba1383e852b17729a2185d816b/src/transformers/pipelines/base.py#L984)
_Originally posted by @radoslav-dimitrov-indeavr in https://github.com/huggingface/transformers/issues/37307#issuecomment-2786494331_
| {
"login": "rishic3",
"id": 77904151,
"node_id": "MDQ6VXNlcjc3OTA0MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/77904151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rishic3",
"html_url": "https://github.com/rishic3",
"followers_url": "https://api.github.com/users/rishic3/followers",
"following_url": "https://api.github.com/users/rishic3/following{/other_user}",
"gists_url": "https://api.github.com/users/rishic3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rishic3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rishic3/subscriptions",
"organizations_url": "https://api.github.com/users/rishic3/orgs",
"repos_url": "https://api.github.com/users/rishic3/repos",
"events_url": "https://api.github.com/users/rishic3/events{/privacy}",
"received_events_url": "https://api.github.com/users/rishic3/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37679/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37678 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37678/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37678/comments | https://api.github.com/repos/huggingface/transformers/issues/37678/events | https://github.com/huggingface/transformers/pull/37678 | 3,011,347,218 | PR_kwDOCUB6oc6TdBQ2 | 37,678 | Add maintainers for ROCm/Intel XPU/Ascend NPU | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T14:56:04 | 2025-04-23T13:28:34 | 2025-04-23T13:28:32 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37678",
"html_url": "https://github.com/huggingface/transformers/pull/37678",
"diff_url": "https://github.com/huggingface/transformers/pull/37678.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37678.patch",
"merged_at": "2025-04-23T13:28:32"
} | We have a list of maintainers for various parts of the library in the issue template; this PR adds maintainers for backends like ROCm, XPU and Ascend NPU.
cc @ivarflakstad @IlyasMoutawwakil - let me know if you're okay with this! | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37678/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37678/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37677 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37677/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37677/comments | https://api.github.com/repos/huggingface/transformers/issues/37677/events | https://github.com/huggingface/transformers/pull/37677 | 3,011,342,604 | PR_kwDOCUB6oc6TdAQH | 37,677 | [docs] only build `en` docs in push CI | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T14:54:17 | 2025-04-22T16:05:16 | 2025-04-22T16:05:11 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37677",
"html_url": "https://github.com/huggingface/transformers/pull/37677",
"diff_url": "https://github.com/huggingface/transformers/pull/37677.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37677.patch",
"merged_at": "2025-04-22T16:05:11"
} | # What does this PR do?
In our push CI, restrict the doc builder to `en`
- All languages: ~26 mins ([data point](https://github.com/huggingface/transformers/actions/runs/14597021526/job/40945447156))
- `en`: ~12 mins (this PR)
Pros:
- `en` has the full doc reference for classes/methods (in other languages it is a copy)
- the source of truth is tested in a fraction of the time
- `main` still builds all languages (different CI workflow)
Cons:
- PRs to other languages won’t build the corresponding docs at push time | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37677/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37676 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37676/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37676/comments | https://api.github.com/repos/huggingface/transformers/issues/37676/events | https://github.com/huggingface/transformers/pull/37676 | 3,011,297,782 | PR_kwDOCUB6oc6Tc2fs | 37,676 | [cleanup] remove old scripts in `/scripts` 🧹 🧹 | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T14:38:19 | 2025-04-22T15:59:06 | 2025-04-22T15:59:03 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37676",
"html_url": "https://github.com/huggingface/transformers/pull/37676",
"diff_url": "https://github.com/huggingface/transformers/pull/37676.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37676.patch",
"merged_at": "2025-04-22T15:59:03"
} | # What does this PR do?
Removes old scripts in `/scripts` that have no cross-references elsewhere in the code base and no recent updates, as discussed with @LysandreJik
Scripts removed:
- `scripts/benchmark/trainer-benchmark.py`: many trainer flags were added since last relevant commit (>3 years ago)
- `scripts/deberta_scrtipt.py`: timing benchmark script related to an old model, DeBERTa (`torch.jit`)
- `scripts/fsmt`: contains scripts with commands that no longer exist (`transformers-cli upload`) or tasks that we've since automated (tiny model creation)
- `scripts/pegasus/build_test_sample_spm_no_bos.py`: script to create a fixture file that we haven't touched in 5 years
- `scripts/tatoeba`: script to convert [Tatoeba-Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge) models (challenge that has long finished) | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37676/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37675 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37675/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37675/comments | https://api.github.com/repos/huggingface/transformers/issues/37675/events | https://github.com/huggingface/transformers/pull/37675 | 3,010,945,289 | PR_kwDOCUB6oc6Tbp0g | 37,675 | Fix autoround docs | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T12:31:40 | 2025-04-22T13:33:15 | 2025-04-22T13:33:13 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37675",
"html_url": "https://github.com/huggingface/transformers/pull/37675",
"diff_url": "https://github.com/huggingface/transformers/pull/37675.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37675.patch",
"merged_at": "2025-04-22T13:33:13"
} | # What does this PR do?
This PR adds the autoround docs to the `_toctree` file | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37675/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37674 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37674/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37674/comments | https://api.github.com/repos/huggingface/transformers/issues/37674/events | https://github.com/huggingface/transformers/pull/37674 | 3,010,810,248 | PR_kwDOCUB6oc6TbMd6 | 37,674 | Update model card for Gemma | {
"login": "afafelwafi",
"id": 54381332,
"node_id": "MDQ6VXNlcjU0MzgxMzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/54381332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/afafelwafi",
"html_url": "https://github.com/afafelwafi",
"followers_url": "https://api.github.com/users/afafelwafi/followers",
"following_url": "https://api.github.com/users/afafelwafi/following{/other_user}",
"gists_url": "https://api.github.com/users/afafelwafi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/afafelwafi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/afafelwafi/subscriptions",
"organizations_url": "https://api.github.com/users/afafelwafi/orgs",
"repos_url": "https://api.github.com/users/afafelwafi/repos",
"events_url": "https://api.github.com/users/afafelwafi/events{/privacy}",
"received_events_url": "https://api.github.com/users/afafelwafi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T11:34:52 | 2025-04-24T16:58:46 | 2025-04-24T16:58:46 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37674",
"html_url": "https://github.com/huggingface/transformers/pull/37674",
"diff_url": "https://github.com/huggingface/transformers/pull/37674.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37674.patch",
"merged_at": "2025-04-24T16:58:46"
} | # What does this PR do?
Update the model card for Gemma #36979
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37674/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37673 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37673/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37673/comments | https://api.github.com/repos/huggingface/transformers/issues/37673/events | https://github.com/huggingface/transformers/pull/37673 | 3,010,741,614 | PR_kwDOCUB6oc6Ta9aY | 37,673 | Fix no_split_modules for Llama4 pretrained models | {
"login": "astefanutti",
"id": 366207,
"node_id": "MDQ6VXNlcjM2NjIwNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/366207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astefanutti",
"html_url": "https://github.com/astefanutti",
"followers_url": "https://api.github.com/users/astefanutti/followers",
"following_url": "https://api.github.com/users/astefanutti/following{/other_user}",
"gists_url": "https://api.github.com/users/astefanutti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astefanutti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astefanutti/subscriptions",
"organizations_url": "https://api.github.com/users/astefanutti/orgs",
"repos_url": "https://api.github.com/users/astefanutti/repos",
"events_url": "https://api.github.com/users/astefanutti/events{/privacy}",
"received_events_url": "https://api.github.com/users/astefanutti/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T11:05:33 | 2025-04-22T14:05:54 | 2025-04-22T14:05:13 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37673",
"html_url": "https://github.com/huggingface/transformers/pull/37673",
"diff_url": "https://github.com/huggingface/transformers/pull/37673.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37673.patch",
"merged_at": "2025-04-22T14:05:13"
} | # What does this PR do?
This PR moves the definition of `_no_split_modules` for Llama4 pre-trained models from the base `Llama4PreTrainedModel` class to the subclasses, so that each model has the correct set of modules.
Otherwise, loading `Llama4ForCausalLM` currently fails because it doesn't contain the `Llama4VisionEncoderLayer` module.
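The change can be sketched as follows (a minimal illustration with placeholder class bodies, not the actual transformers implementation — only the `_no_split_modules` attribute and the class names from the error are taken from the source):

```python
# Minimal sketch of the pattern described above: each pretrained subclass
# lists only the layer classes it actually instantiates, so FSDP's
# auto-wrap policy never looks for a vision layer in a text-only model.
class Llama4PreTrainedModel:
    _no_split_modules = None  # no single list fits every subclass


class Llama4ForCausalLM(Llama4PreTrainedModel):
    # text-only model: no Llama4VisionEncoderLayer modules exist here
    _no_split_modules = ["Llama4TextDecoderLayer"]


class Llama4ForConditionalGeneration(Llama4PreTrainedModel):
    # multimodal model: both text and vision layers are present
    _no_split_modules = ["Llama4TextDecoderLayer", "Llama4VisionEncoderLayer"]
```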
Fixes #37672
## Who can review?
@ArthurZucker @amyeroberts @qubvel | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37673/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37672 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37672/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37672/comments | https://api.github.com/repos/huggingface/transformers/issues/37672/events | https://github.com/huggingface/transformers/issues/37672 | 3,010,727,922 | I_kwDOCUB6oc6zdA_y | 37,672 | ValueError: Could not find the transformer layer class Llama4VisionEncoderLayer in the model | {
"login": "astefanutti",
"id": 366207,
"node_id": "MDQ6VXNlcjM2NjIwNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/366207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astefanutti",
"html_url": "https://github.com/astefanutti",
"followers_url": "https://api.github.com/users/astefanutti/followers",
"following_url": "https://api.github.com/users/astefanutti/following{/other_user}",
"gists_url": "https://api.github.com/users/astefanutti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astefanutti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astefanutti/subscriptions",
"organizations_url": "https://api.github.com/users/astefanutti/orgs",
"repos_url": "https://api.github.com/users/astefanutti/repos",
"events_url": "https://api.github.com/users/astefanutti/events{/privacy}",
"received_events_url": "https://api.github.com/users/astefanutti/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-22T10:59:45 | 2025-04-22T14:05:14 | 2025-04-22T14:05:14 | CONTRIBUTOR | null | null | null | null | ### System Info
transformers==4.51.3
Python version: 3.11
### Who can help?
@ArthurZucker @amyeroberts @qubvel
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Load `Llama4ForCausalLM` model with FSDP auto-wrap policy enabled, e.g.:
```python
from transformers import Llama4ForCausalLM
from trl import SFTTrainer

model = Llama4ForCausalLM.from_pretrained("meta-llama/Llama-4-Scout-17B-16E-Instruct", torch_dtype="auto")
# Training
trainer = SFTTrainer(
# model=model_args.model_name_or_path,
model=model,
args=training_args,
...
)
```
This produces the following error:
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/tmp/tmp.RYY4AI2EBM/ephemeral_script.py", line 137, in <module>
[rank0]: main({'model_name_or_path': 'meta-llama/Llama-4-Scout-17B-16E-Instruct', 'model_revision': 'main', 'torch_dtype': 'bfloat16', 'attn_implementation': 'flex_attention', 'use_liger': False, 'use_peft': False, 'lora_r': 16, 'lora_alpha': 8, 'lora_dropout': 0.05, 'lora_target_modules': ['q_proj', 'v_proj', 'k_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj'], 'lora_modules_to_save': ['lm_head', 'embed_tokens'], 'load_in_4bit': False, 'load_in_8bit': False, 'dataset_name': 'gsm8k', 'dataset_config': 'main', 'dataset_train_split': 'train', 'dataset_test_split': 'test', 'dataset_text_field': 'text', 'dataset_kwargs': {'add_special_tokens': False, 'append_concat_token': False}, 'max_seq_length': 8192, 'dataset_batch_size': 1000, 'packing': False, 'padding_free': False, 'num_train_epochs': 10, 'per_device_train_batch_size': 64, 'per_device_eval_batch_size': 64, 'auto_find_batch_size': False, 'eval_strategy': 'epoch', 'bf16': True, 'tf32': False, 'learning_rate': 0.0002, 'warmup_steps': 10, 'lr_scheduler_type': 'inverse_sqrt', 'optim': 'adamw_torch_fused', 'max_grad_norm': 1.0, 'seed': 42, 'gradient_accumulation_steps': 1, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': {'use_reentrant': False}, 'fsdp': 'full_shard auto_wrap', 'fsdp_config': {'activation_checkpointing': True, 'cpu_ram_efficient_loading': False, 'sync_module_states': True, 'use_orig_params': True, 'limit_all_gathers': False}, 'save_strategy': 'no', 'save_total_limit': 1, 'resume_from_checkpoint': False, 'log_level': 'info', 'logging_strategy': 'steps', 'logging_steps': 1, 'report_to': ['tensorboard'], 'output_dir': '/mnt/shared/Llama-4-Scout-17B-16E-Instruct'})
[rank0]: File "/tmp/tmp.RYY4AI2EBM/ephemeral_script.py", line 130, in main
[rank0]: trainer.train(resume_from_checkpoint=checkpoint)
[rank0]: File "/opt/app-root/lib64/python3.11/site-packages/transformers/trainer.py", line 2238, in train
[rank0]: return inner_training_loop(
[rank0]: ^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/app-root/lib64/python3.11/site-packages/transformers/trainer.py", line 2357, in _inner_training_loop
[rank0]: self.model = self.accelerator.prepare(self.model)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/app-root/lib64/python3.11/site-packages/accelerate/accelerator.py", line 1446, in prepare
[rank0]: result = tuple(
[rank0]: ^^^^^^
[rank0]: File "/opt/app-root/lib64/python3.11/site-packages/accelerate/accelerator.py", line 1447, in <genexpr>
[rank0]: self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/app-root/lib64/python3.11/site-packages/accelerate/accelerator.py", line 1289, in _prepare_one
[rank0]: return self.prepare_model(obj, device_placement=device_placement)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/opt/app-root/lib64/python3.11/site-packages/accelerate/accelerator.py", line 1630, in prepare_model
[rank0]: self.state.fsdp_plugin.set_auto_wrap_policy(model)
[rank0]: File "/opt/app-root/lib64/python3.11/site-packages/accelerate/utils/dataclasses.py", line 1903, in set_auto_wrap_policy
[rank0]: raise ValueError(f"Could not find the transformer layer class {layer_class} in the model.")
[rank0]: ValueError: Could not find the transformer layer class Llama4VisionEncoderLayer in the model.
```
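For reference, the failing check can be sketched as follows (a simplified illustration inferred from the traceback above, not accelerate's actual implementation — `set_auto_wrap_policy` requires every class name listed in the model's `_no_split_modules` to match a module class present in the model):

```python
# Simplified sketch of the lookup that raises in the traceback above:
# each declared transformer layer class name must match the class of
# some submodule, otherwise a ValueError is raised.
def find_transformer_layer_classes(model_modules, no_split_modules):
    available = {type(module).__name__ for module in model_modules}
    for layer_class in no_split_modules:
        if layer_class not in available:
            raise ValueError(f"Could not find the transformer layer class {layer_class} in the model.")
    return list(no_split_modules)
```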
### Expected behavior
The model should load and be wrapped by FSDP successfully. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37672/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37671 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37671/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37671/comments | https://api.github.com/repos/huggingface/transformers/issues/37671/events | https://github.com/huggingface/transformers/issues/37671 | 3,010,674,417 | I_kwDOCUB6oc6zcz7x | 37,671 | `Model.from_pretrained` breaks when using SinusoidalEmbedding | {
"login": "ZhiyuanChen",
"id": 28757366,
"node_id": "MDQ6VXNlcjI4NzU3MzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/28757366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhiyuanChen",
"html_url": "https://github.com/ZhiyuanChen",
"followers_url": "https://api.github.com/users/ZhiyuanChen/followers",
"following_url": "https://api.github.com/users/ZhiyuanChen/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhiyuanChen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhiyuanChen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhiyuanChen/subscriptions",
"organizations_url": "https://api.github.com/users/ZhiyuanChen/orgs",
"repos_url": "https://api.github.com/users/ZhiyuanChen/repos",
"events_url": "https://api.github.com/users/ZhiyuanChen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhiyuanChen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-22T10:36:14 | 2025-06-13T14:01:05 | 2025-06-13T14:01:05 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.51.0
- Platform: macOS-15.3.1-arm64-arm-64bit
- Python version: 3.12.9
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: 1.6.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
SinusoidalEmbedding does not require a `state_dict`, and since there are some bugs related to loading/saving its state (#31387), a workaround is to override the related methods:
```python
def state_dict(self, destination=None, prefix="", keep_vars=False):
    return {}

def load_state_dict(self, state_dict, strict=True):
    return

def _load_from_state_dict(
    self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
):
    return
```
This used to work, but a recent update broke it (going back to transformers 4.50 works fine).
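An alternative way to keep the sinusoidal table out of the `state_dict`, without overriding any of the methods above, is a non-persistent buffer. A minimal sketch (illustrative only, not the multimolecule implementation):

```python
import math

import torch
from torch import nn


class SinusoidalEmbedding(nn.Module):
    """Fixed sinusoidal embedding whose table is excluded from state_dict."""

    def __init__(self, num_positions: int, dim: int):
        super().__init__()
        position = torch.arange(num_positions).unsqueeze(1)
        div = torch.exp(torch.arange(0, dim, 2) * (-math.log(10000.0) / dim))
        table = torch.zeros(num_positions, dim)
        table[:, 0::2] = torch.sin(position * div)
        table[:, 1::2] = torch.cos(position * div)
        # persistent=False keeps the buffer out of state_dict entirely,
        # so save/load never touches it
        self.register_buffer("weight", table, persistent=False)

    def forward(self, position_ids: torch.Tensor) -> torch.Tensor:
        return self.weight[position_ids]
```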
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
pip install multimolecule
Failure:
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer, pipeline
from multimolecule.models import ErnieRnaForSecondaryStructurePrediction as Model
model = Model.from_pretrained("multimolecule/ernierna-ss")
model.to("cuda")
```
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
Works:
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer, pipeline
from multimolecule.models import ErnieRnaForSecondaryStructurePrediction as Model
model = Model(AutoConfig.from_pretrained("multimolecule/ernierna-ss"))
model.to("cuda")
```
### Expected behavior
No Error | {
"login": "ZhiyuanChen",
"id": 28757366,
"node_id": "MDQ6VXNlcjI4NzU3MzY2",
"avatar_url": "https://avatars.githubusercontent.com/u/28757366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhiyuanChen",
"html_url": "https://github.com/ZhiyuanChen",
"followers_url": "https://api.github.com/users/ZhiyuanChen/followers",
"following_url": "https://api.github.com/users/ZhiyuanChen/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhiyuanChen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhiyuanChen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhiyuanChen/subscriptions",
"organizations_url": "https://api.github.com/users/ZhiyuanChen/orgs",
"repos_url": "https://api.github.com/users/ZhiyuanChen/repos",
"events_url": "https://api.github.com/users/ZhiyuanChen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhiyuanChen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37671/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37670 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37670/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37670/comments | https://api.github.com/repos/huggingface/transformers/issues/37670/events | https://github.com/huggingface/transformers/pull/37670 | 3,010,648,108 | PR_kwDOCUB6oc6TapIa | 37,670 | Correct warm-up with fp8 | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T10:26:06 | 2025-04-22T11:21:02 | 2025-04-22T11:12:49 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37670",
"html_url": "https://github.com/huggingface/transformers/pull/37670",
"diff_url": "https://github.com/huggingface/transformers/pull/37670.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37670.patch",
"merged_at": "2025-04-22T11:12:49"
} | # What does this PR do?
This PR starts the work of having fine-grained allocation when using quantizers. For now, it reflects the actual library behavior, simply adjusting fp8 quantization (for DeepSeek), but it should become more general in the future (i.e. the default should probably be 2 if most quantizers correctly pre-process; I did not check all of them yet), and be overridden only for a few.
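The per-quantizer default scheme described above could be sketched as follows (plain Python; the class and attribute names are hypothetical illustrations, not the actual `transformers` quantizer API):

```python
# Hypothetical sketch of fine-grained, per-quantizer defaults: a base class
# carries a conservative default, and individual quantizers override it.
class HfQuantizer:
    # Number of allocation passes used during warm-up; conservative default.
    num_allocation_passes = 1


class Fp8Quantizer(HfQuantizer):
    # fp8 (e.g. for DeepSeek) correctly pre-processes its weights, so it can
    # safely use the more aggressive setting.
    num_allocation_passes = 2


def allocation_passes(quantizer: HfQuantizer) -> int:
    # Resolve the effective value for a given quantizer instance.
    return quantizer.num_allocation_passes


print(allocation_passes(HfQuantizer()))   # 1
print(allocation_passes(Fp8Quantizer()))  # 2
```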
cc @SunMarc @MekkCyber | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37670/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37669 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37669/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37669/comments | https://api.github.com/repos/huggingface/transformers/issues/37669/events | https://github.com/huggingface/transformers/pull/37669 | 3,010,635,127 | PR_kwDOCUB6oc6TamTQ | 37,669 | [docs] fix bug in quantization docs | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T10:20:47 | 2025-04-22T11:41:46 | 2025-04-22T11:18:04 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37669",
"html_url": "https://github.com/huggingface/transformers/pull/37669",
"diff_url": "https://github.com/huggingface/transformers/pull/37669.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37669.patch",
"merged_at": null
} | # What does this PR do?
The docbuilder is crashing because we close more sections than we open.
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37669/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37668 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37668/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37668/comments | https://api.github.com/repos/huggingface/transformers/issues/37668/events | https://github.com/huggingface/transformers/pull/37668 | 3,010,592,049 | PR_kwDOCUB6oc6Tac7y | 37,668 | Refactor bitsandbytes doc | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T10:03:23 | 2025-04-22T14:13:27 | 2025-04-22T14:13:26 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37668",
"html_url": "https://github.com/huggingface/transformers/pull/37668",
"diff_url": "https://github.com/huggingface/transformers/pull/37668.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37668.patch",
"merged_at": "2025-04-22T14:13:25"
} | # What does this PR do?
Refactor bitsandbytes docs for better readability | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37668/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37667 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37667/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37667/comments | https://api.github.com/repos/huggingface/transformers/issues/37667/events | https://github.com/huggingface/transformers/pull/37667 | 3,010,548,301 | PR_kwDOCUB6oc6TaTbp | 37,667 | Fix duplicated weights in fp8 quantization | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T09:46:07 | 2025-04-28T11:02:19 | 2025-04-22T11:12:27 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37667",
"html_url": "https://github.com/huggingface/transformers/pull/37667",
"diff_url": "https://github.com/huggingface/transformers/pull/37667.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37667.patch",
"merged_at": "2025-04-22T11:12:27"
} | # What does this PR do?
https://github.com/huggingface/transformers/pull/35926 moved the fp8 params from buffers to parameters (which makes sense), but the quantizer itself was not updated, so the parameters would still be added as buffers, leading to duplicated weights (they would live as both buffers and parameters).
Also, note that we SHOULD NEVER use `set_module_tensor_to_device`, as it clears the CUDA cache at each call, which is really inefficient and defeats the purpose of the CUDA warmup we do in `from_pretrained`.
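The duplication failure mode can be illustrated with a small schematic (plain Python, not the real torch registration semantics): if a module exports both a parameter registry and a buffer registry, a tensor registered under both ends up stored twice.

```python
# Minimal, torch-free schematic of how registering the same tensor as both a
# buffer and a parameter duplicates its storage in the exported state.
class TinyModule:
    def __init__(self):
        self._parameters = {}
        self._buffers = {}

    def register_parameter(self, name, tensor):
        self._parameters[name] = tensor

    def register_buffer(self, name, tensor):
        self._buffers[name] = tensor

    def state_dict(self):
        # Both registries are exported; a name present in both is stored twice.
        merged = {f"param/{k}": v for k, v in self._parameters.items()}
        merged.update({f"buffer/{k}": v for k, v in self._buffers.items()})
        return merged


m = TinyModule()
weight_scale = [0.5]  # stand-in for an fp8 scale tensor
m.register_parameter("weight_scale_inv", weight_scale)   # new, correct path
m.register_buffer("weight_scale_inv", weight_scale)      # stale quantizer path
print(len(m.state_dict()))  # 2 -> the same weight lives twice
```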
cc @SunMarc @MekkCyber
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37667/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37666 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37666/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37666/comments | https://api.github.com/repos/huggingface/transformers/issues/37666/events | https://github.com/huggingface/transformers/pull/37666 | 3,010,449,548 | PR_kwDOCUB6oc6TZ-G5 | 37,666 | Refine parameter type annotations | {
"login": "flashJd",
"id": 23452216,
"node_id": "MDQ6VXNlcjIzNDUyMjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/23452216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flashJd",
"html_url": "https://github.com/flashJd",
"followers_url": "https://api.github.com/users/flashJd/followers",
"following_url": "https://api.github.com/users/flashJd/following{/other_user}",
"gists_url": "https://api.github.com/users/flashJd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flashJd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flashJd/subscriptions",
"organizations_url": "https://api.github.com/users/flashJd/orgs",
"repos_url": "https://api.github.com/users/flashJd/repos",
"events_url": "https://api.github.com/users/flashJd/events{/privacy}",
"received_events_url": "https://api.github.com/users/flashJd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T09:08:39 | 2025-04-24T14:37:14 | 2025-04-24T14:37:14 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37666",
"html_url": "https://github.com/huggingface/transformers/pull/37666",
"diff_url": "https://github.com/huggingface/transformers/pull/37666.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37666.patch",
"merged_at": "2025-04-24T14:37:14"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
https://github.com/huggingface/transformers/pull/36644 was blocked at `Checking for the ability to merge automatically` (possibly a GitHub issue), so I closed it and opened this new PR.
This PR refines the parameter type annotations in the `add_tokens` and `add_special_tokens` methods. The `special_tokens_dict` parameter accepts a list or tuple when the key is `additional_special_tokens`, but the previous annotations did not cover this.
The goal is to make the parameter types clearer.
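A simplified sketch of the kind of annotation widening described above (the signature and body are illustrative only; see the PR diff for the actual change, which also covers `AddedToken` values):

```python
# Sketch: the value type allows a single token string for most keys, and a
# list/tuple of tokens for the "additional_special_tokens" key.
from typing import Dict, List, Tuple, Union


def add_special_tokens(
    special_tokens_dict: Dict[str, Union[str, List[str], Tuple[str, ...]]],
) -> int:
    # Count how many tokens would be added, honoring the widened value types.
    added = 0
    for key, value in special_tokens_dict.items():
        if key == "additional_special_tokens":
            added += len(value)  # sequence of tokens
        else:
            added += 1           # single token string
    return added


print(add_special_tokens({"pad_token": "<pad>", "additional_special_tokens": ["<a>", "<b>"]}))  # 3
```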
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37666/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37665 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37665/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37665/comments | https://api.github.com/repos/huggingface/transformers/issues/37665/events | https://github.com/huggingface/transformers/pull/37665 | 3,010,324,523 | PR_kwDOCUB6oc6TZjLn | 37,665 | [tests] fix `test_nemotron_8b_generation_sdpa` | {
"login": "faaany",
"id": 24477841,
"node_id": "MDQ6VXNlcjI0NDc3ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faaany",
"html_url": "https://github.com/faaany",
"followers_url": "https://api.github.com/users/faaany/followers",
"following_url": "https://api.github.com/users/faaany/following{/other_user}",
"gists_url": "https://api.github.com/users/faaany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faaany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faaany/subscriptions",
"organizations_url": "https://api.github.com/users/faaany/orgs",
"repos_url": "https://api.github.com/users/faaany/repos",
"events_url": "https://api.github.com/users/faaany/events{/privacy}",
"received_events_url": "https://api.github.com/users/faaany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-22T08:21:35 | 2025-04-24T09:28:36 | 2025-04-24T09:28:35 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37665",
"html_url": "https://github.com/huggingface/transformers/pull/37665",
"diff_url": "https://github.com/huggingface/transformers/pull/37665.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37665.patch",
"merged_at": "2025-04-24T09:28:35"
} | ## What does this PR do?
Not sure whether the model got updated, but `test_nemotron_8b_generation_sdpa` fails on CUDA. This PR fixes it by passing `max_new_tokens` to `generate` so the output length matches the expected text. Without the fix, you get the following error:
```bash
def test_nemotron_8b_generation_sdpa(self):
text = ["What is the largest planet in solar system?"]
EXPECTED_TEXT = [
"What is the largest planet in solar system?\nAnswer: Jupiter\n\nWhat is the answer",
]
model_id = "thhaus/nemotron3-8b"
model = NemotronForCausalLM.from_pretrained(
model_id, torch_dtype=torch.float16, device_map="auto", attn_implementation="sdpa"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer(text, return_tensors="pt").to(torch_device)
output = model.generate(**inputs, do_sample=False)
output_text = tokenizer.batch_decode(output, skip_special_tokens=True)
> self.assertEqual(EXPECTED_TEXT, output_text)
E AssertionError: Lists differ: ['Wha[46 chars]er: Jupiter\n\nWhat is the answer'] != ['Wha[46 chars]er: Jupiter\n\nWhat is the answer: What is the name of the 19']
E
E First differing element 0:
E 'What[44 chars]wer: Jupiter\n\nWhat is the answer'
E 'What[44 chars]wer: Jupiter\n\nWhat is the answer: What is the name of the 19'
E
E ['What is the largest planet in solar system?\n'
E 'Answer: Jupiter\n'
E '\n'
E - 'What is the answer']
E + 'What is the answer: What is the name of the 19']
tests/models/nemotron/test_modeling_nemotron.py:200: AssertionError
```
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37665/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37664/comments | https://api.github.com/repos/huggingface/transformers/issues/37664/events | https://github.com/huggingface/transformers/issues/37664 | 3,009,767,014 | I_kwDOCUB6oc6zZWZm | 37,664 | System kills the processes of llama2-70B fsdp finetune when loading the model | {
"login": "yuanwu2017",
"id": 34643241,
"node_id": "MDQ6VXNlcjM0NjQzMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/34643241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuanwu2017",
"html_url": "https://github.com/yuanwu2017",
"followers_url": "https://api.github.com/users/yuanwu2017/followers",
"following_url": "https://api.github.com/users/yuanwu2017/following{/other_user}",
"gists_url": "https://api.github.com/users/yuanwu2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuanwu2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuanwu2017/subscriptions",
"organizations_url": "https://api.github.com/users/yuanwu2017/orgs",
"repos_url": "https://api.github.com/users/yuanwu2017/repos",
"events_url": "https://api.github.com/users/yuanwu2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuanwu2017/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-22T03:38:27 | 2025-05-01T14:41:18 | 2025-04-23T06:01:57 | CONTRIBUTOR | null | null | null | null | ### System Info
transformers version: 4.52.0.dev0(fee11906) latest main
- `transformers` version: 4.52.0.dev0
- Platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0.dev0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: FSDP
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- fsdp_config: {'fsdp_activation_checkpointing': False, 'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP', 'fsdp_backward_prefetch': 'BACKWARD_PRE', 'fsdp_cpu_ram_efficient_loading': True, 'fsdp_for
ward_prefetch': False, 'fsdp_offload_params': False, 'fsdp_reshard_after_forward': 'FULL_SHARD', 'fsdp_state_dict_type': 'FULL_STATE_DICT', 'fsdp_sync_module_states': True, 'fsdp_transformer_layer_cls_to_wr
ap': '', 'fsdp_use_orig_params': True, 'fsdp_version': 1}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: 0.16.1+hpu.synapse.v1.20.0
- PyTorch version (GPU?): 2.6.0+hpu_1.20.0-543.git4952fce (False)
- Tensorflow version (GPU?): 2.15.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using distributed or parallel set-up in script?: <fill in>
- Using HPU in script?: <fill in>
- HPU type: GAUDI2
### Who can help?
@ArthurZucker @SunMarc @zach-huggingface
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
https://github.com/yuanwu2017/llm-dbg/tree/main/finetune
**1. run 8 HPUs fsdp finetune with llama2-70b:**
`accelerate launch --config_file hpu_config_fsdp.yaml run_lora_clm.py --model_name_or_path meta-llama/Llama-2-70b-hf --dataset_name tatsu-lab/alpaca --bf16 True --output_dir ./olora --max_seq_len 2048 --gradient_checkpointing --per_device_train_batch_size 5 --save_strategy no --learning_rate 0.0004 --warmup_ratio 0.03 --lr_scheduler_type "constant" --logging_steps 1 --dataset_concatenation --do_train --lora_rank 4 --lora_target_modules "q_proj" "v_proj" "k_proj" "o_proj" --validation_split_percentage 4 --fsdp auto_wrap --fsdp_config ./fsdp_config.json --num_train_epochs 2 --eval_strategy epoch --per_device_eval_batch_size 1 --eval_delay 2 --do_eval --torch_compile --gradient_accumulation_steps 2`
**The system kills the finetune processes**
In the latest code, [low_cpu_mem_usage](https://github.com/huggingface/transformers/blob/fee1190601b5d04ec6d3f7f58fd22788d7f3236d/src/transformers/modeling_utils.py#L4036) is removed, so each process loads a full copy of the model into CPU memory (8 copies in total). CPU memory was exhausted, and the system's OOM monitor killed the finetune processes.
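A back-of-the-envelope check (assumed sizes, not measured values) shows why one full copy per rank exhausts host RAM:

```python
# Rough memory estimate for loading Llama-2-70b once per FSDP rank on the host.
params = 70e9          # ~70B parameters (assumed)
bytes_per_param = 2    # bf16 (assumed)
processes = 8          # one full copy per rank without low_cpu_mem_usage

per_copy_gb = params * bytes_per_param / 1e9
total_gb = per_copy_gb * processes
print(f"per copy: ~{per_copy_gb:.0f} GB, all ranks: ~{total_gb:.0f} GB")
# per copy: ~140 GB, all ranks: ~1120 GB -> far beyond typical host RAM
```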
<img width="833" alt="Image" src="https://github.com/user-attachments/assets/9f3b72d9-275e-4fc8-b3eb-53d990e7f15c" />
<img width="839" alt="Image" src="https://github.com/user-attachments/assets/504b49cd-ca59-4dc2-86ab-a3eb2bc9cdcc" />
### Expected behavior
The finetune should run without errors. With transformers <= 4.50.3, the same finetune works without error. | {
"login": "yuanwu2017",
"id": 34643241,
"node_id": "MDQ6VXNlcjM0NjQzMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/34643241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuanwu2017",
"html_url": "https://github.com/yuanwu2017",
"followers_url": "https://api.github.com/users/yuanwu2017/followers",
"following_url": "https://api.github.com/users/yuanwu2017/following{/other_user}",
"gists_url": "https://api.github.com/users/yuanwu2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuanwu2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuanwu2017/subscriptions",
"organizations_url": "https://api.github.com/users/yuanwu2017/orgs",
"repos_url": "https://api.github.com/users/yuanwu2017/repos",
"events_url": "https://api.github.com/users/yuanwu2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuanwu2017/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37664/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37663 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37663/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37663/comments | https://api.github.com/repos/huggingface/transformers/issues/37663/events | https://github.com/huggingface/transformers/issues/37663 | 3,009,720,609 | I_kwDOCUB6oc6zZLEh | 37,663 | Distributed loading error with from_pretrained for tp_plan is None | {
"login": "kcz358",
"id": 92624596,
"node_id": "U_kgDOBYVW1A",
"avatar_url": "https://avatars.githubusercontent.com/u/92624596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kcz358",
"html_url": "https://github.com/kcz358",
"followers_url": "https://api.github.com/users/kcz358/followers",
"following_url": "https://api.github.com/users/kcz358/following{/other_user}",
"gists_url": "https://api.github.com/users/kcz358/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kcz358/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kcz358/subscriptions",
"organizations_url": "https://api.github.com/users/kcz358/orgs",
"repos_url": "https://api.github.com/users/kcz358/repos",
"events_url": "https://api.github.com/users/kcz358/events{/privacy}",
"received_events_url": "https://api.github.com/users/kcz358/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-22T03:00:00 | 2025-04-24T12:56:54 | 2025-04-24T12:56:54 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.52.0.dev0
- Platform: Linux-5.15.120.bsk.2-amd64-x86_64-with-glibc2.36
- Python version: 3.10.16
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: 1.6.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: True
- Using GPU in script?: True
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@zucchini-nlp @amyeroberts @qubvel
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Minimal Code snippet
```python
import torch.distributed as dist
import torch
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
def setup():
dist.init_process_group(backend="nccl")
local_rank = dist.get_rank() % dist.get_world_size()
world_size = dist.get_world_size()
torch.cuda.set_device(local_rank)
return local_rank, world_size
def cleanup():
dist.destroy_process_group()
def main():
local_rank, world_size = setup()
model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-7B", device_map = f"cuda:{local_rank}", torch_dtype="auto")
cleanup()
if __name__ == "__main__":
main()
```
Then, running with
```bash
torchrun --nproc_per_node="8" \
--nnodes="1" \
--node_rank="0" \
--master_addr="127.0.0.1" \
--master_port="12345" \
reproduce_tp_script.py
```
Error is like this:
```bash
[W422 02:54:33.854949743 Utils.hpp:165] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function operator())
[W422 02:54:33.872220475 Utils.hpp:165] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function operator())
[W422 02:54:33.046816487 Utils.hpp:165] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function operator())
[W422 02:54:33.118141019 Utils.hpp:165] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function operator())
[W422 02:54:33.245757189 Utils.hpp:165] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function operator())
[W422 02:54:33.367135370 Utils.hpp:165] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function operator())
[W422 02:54:33.496151955 Utils.hpp:165] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function operator())
[W422 02:54:33.503352722 Utils.hpp:165] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function operator())
Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_section'}
Qwen2_5OmniToken2WavModel does not support eager attention implementation, fall back to sdpa
Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_section'}
Loading checkpoint shards: 0%| | 0/5 [00:00<?, ?it/s][rank6]: Traceback (most recent call last):
[rank6]: File "/opt/tiger/dev/lmms-eval/scripts/reproduce_tp_script.py", line 23, in <module>
[rank6]: main()
[rank6]: File "/opt/tiger/dev/lmms-eval/scripts/reproduce_tp_script.py", line 19, in main
[rank6]: model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-7B", device_map = f"cuda:{local_rank}", torch_dtype="auto")
[rank6]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py", line 4421, in from_pretrained
[rank6]: model = super().from_pretrained(
[rank6]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 282, in _wrapper
[rank6]: return func(*args, **kwargs)
[rank6]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4470, in from_pretrained
[rank6]: ) = cls._load_pretrained_model(
[rank6]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4869, in _load_pretrained_model
[rank6]: caching_allocator_warmup(model_to_load, expanded_device_map, factor=2 if hf_quantizer is None else 4)
[rank6]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 5901, in caching_allocator_warmup
[rank6]: re.compile("|".join([re.escape(plan) for plan in model._tp_plan]))
[rank6]: TypeError: 'NoneType' object is not iterable
Qwen2_5OmniToken2WavModel does not support eager attention implementation, fall back to sdpa
Loading checkpoint shards: 0%| | 0/5 [00:00<?, ?it/s]
Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_section'}
Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_section'}
Qwen2_5OmniToken2WavModel does not support eager attention implementation, fall back to sdpa
Qwen2_5OmniToken2WavModel does not support eager attention implementation, fall back to sdpa
Loading checkpoint shards: 0%| | 0/5 [00:00<?, ?it/s]Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_section'}
[rank5]: Traceback (most recent call last):
[rank5]: File "/opt/tiger/dev/lmms-eval/scripts/reproduce_tp_script.py", line 23, in <module>
[rank5]: main()
[rank5]: File "/opt/tiger/dev/lmms-eval/scripts/reproduce_tp_script.py", line 19, in main
[rank5]: model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-7B", device_map = f"cuda:{local_rank}", torch_dtype="auto")
[rank5]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py", line 4421, in from_pretrained
[rank5]: model = super().from_pretrained(
[rank5]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 282, in _wrapper
[rank5]: return func(*args, **kwargs)
[rank5]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4470, in from_pretrained
[rank5]: ) = cls._load_pretrained_model(
[rank5]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4869, in _load_pretrained_model
[rank5]: caching_allocator_warmup(model_to_load, expanded_device_map, factor=2 if hf_quantizer is None else 4)
[rank5]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 5901, in caching_allocator_warmup
[rank5]: re.compile("|".join([re.escape(plan) for plan in model._tp_plan]))
[rank5]: TypeError: 'NoneType' object is not iterable
Unrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_section'}
Qwen2_5OmniToken2WavModel does not support eager attention implementation, fall back to sdpa
Qwen2_5OmniToken2WavModel does not support eager attention implementation, fall back to sdpa
Loading checkpoint shards: 0%| | 0/5 [00:00<?, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00<?, ?it/s][rank7]: Traceback (most recent call last):
[rank7]: File "/opt/tiger/dev/lmms-eval/scripts/reproduce_tp_script.py", line 23, in <module>
[rank7]: main()
[rank7]: File "/opt/tiger/dev/lmms-eval/scripts/reproduce_tp_script.py", line 19, in main
[rank7]: model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-7B", device_map = f"cuda:{local_rank}", torch_dtype="auto")
[rank7]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py", line 4421, in from_pretrained
[rank7]: model = super().from_pretrained(
[rank7]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 282, in _wrapper
[rank7]: return func(*args, **kwargs)
[rank7]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4470, in from_pretrained
[rank7]: ) = cls._load_pretrained_model(
[rank7]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4869, in _load_pretrained_model
[rank7]: caching_allocator_warmup(model_to_load, expanded_device_map, factor=2 if hf_quantizer is None else 4)
[rank7]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 5901, in caching_allocator_warmup
[rank7]: re.compile("|".join([re.escape(plan) for plan in model._tp_plan]))
[rank7]: TypeError: 'NoneType' object is not iterable
Loading checkpoint shards: 0%| | 0/5 [00:00<?, ?it/s][rank1]: Traceback (most recent call last):
[rank1]: File "/opt/tiger/dev/lmms-eval/scripts/reproduce_tp_script.py", line 23, in <module>
[rank1]: main()
[rank1]: File "/opt/tiger/dev/lmms-eval/scripts/reproduce_tp_script.py", line 19, in main
[rank1]: model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-7B", device_map = f"cuda:{local_rank}", torch_dtype="auto")
[rank1]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py", line 4421, in from_pretrained
[rank1]: model = super().from_pretrained(
[rank1]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 282, in _wrapper
[rank1]: return func(*args, **kwargs)
[rank1]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4470, in from_pretrained
[rank1]: ) = cls._load_pretrained_model(
[rank1]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4869, in _load_pretrained_model
[rank1]: caching_allocator_warmup(model_to_load, expanded_device_map, factor=2 if hf_quantizer is None else 4)
[rank1]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 5901, in caching_allocator_warmup
[rank1]: re.compile("|".join([re.escape(plan) for plan in model._tp_plan]))
[rank1]: TypeError: 'NoneType' object is not iterable
Loading checkpoint shards: 0%| | 0/5 [00:00<?, ?it/s]
Loading checkpoint shards: 0%| | 0/5 [00:00<?, ?it/s][rank2]: Traceback (most recent call last):
[rank2]: File "/opt/tiger/dev/lmms-eval/scripts/reproduce_tp_script.py", line 23, in <module>
[rank2]: main()
[rank2]: File "/opt/tiger/dev/lmms-eval/scripts/reproduce_tp_script.py", line 19, in main
[rank2]: model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-7B", device_map = f"cuda:{local_rank}", torch_dtype="auto")
[rank2]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py", line 4421, in from_pretrained
[rank2]: model = super().from_pretrained(
[rank2]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 282, in _wrapper
[rank2]: return func(*args, **kwargs)
[rank2]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4470, in from_pretrained
[rank2]: ) = cls._load_pretrained_model(
[rank2]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4869, in _load_pretrained_model
[rank2]: caching_allocator_warmup(model_to_load, expanded_device_map, factor=2 if hf_quantizer is None else 4)
[rank2]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 5901, in caching_allocator_warmup
[rank2]: re.compile("|".join([re.escape(plan) for plan in model._tp_plan]))
[rank2]: TypeError: 'NoneType' object is not iterable
Loading checkpoint shards: 0%| | 0/5 [00:00<?, ?it/s][rank3]: Traceback (most recent call last):
[rank3]: File "/opt/tiger/dev/lmms-eval/scripts/reproduce_tp_script.py", line 23, in <module>
[rank3]: main()
[rank3]: File "/opt/tiger/dev/lmms-eval/scripts/reproduce_tp_script.py", line 19, in main
[rank3]: model = Qwen2_5OmniForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-Omni-7B", device_map = f"cuda:{local_rank}", torch_dtype="auto")
[rank3]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py", line 4421, in from_pretrained
[rank3]: model = super().from_pretrained(
[rank3]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 282, in _wrapper
[rank3]: return func(*args, **kwargs)
[rank3]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4470, in from_pretrained
[rank3]: ) = cls._load_pretrained_model(
[rank3]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4869, in _load_pretrained_model
[rank3]: caching_allocator_warmup(model_to_load, expanded_device_map, factor=2 if hf_quantizer is None else 4)
[rank3]: File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/transformers/modeling_utils.py", line 5901, in caching_allocator_warmup
[rank3]: re.compile("|".join([re.escape(plan) for plan in model._tp_plan]))
[rank3]: TypeError: 'NoneType' object is not iterable
Loading checkpoint shards: 0%| | 0/5 [00:00<?, ?it/s]
W0422 02:54:36.514000 100787 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 100799 closing signal SIGTERM
W0422 02:54:36.515000 100787 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 100800 closing signal SIGTERM
W0422 02:54:36.516000 100787 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 100801 closing signal SIGTERM
W0422 02:54:36.516000 100787 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 100802 closing signal SIGTERM
W0422 02:54:36.517000 100787 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 100803 closing signal SIGTERM
W0422 02:54:36.517000 100787 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 100804 closing signal SIGTERM
W0422 02:54:36.518000 100787 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 100806 closing signal SIGTERM
E0422 02:54:37.578000 100787 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 6 (pid: 100805) of binary: /home/tiger/miniconda3/envs/reproduce/bin/python3
Traceback (most recent call last):
File "/home/tiger/miniconda3/envs/reproduce/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/torch/distributed/run.py", line 918, in main
run(args)
File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/torch/distributed/run.py", line 909, in run
elastic_launch(
File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/tiger/miniconda3/envs/reproduce/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
scripts/reproduce_tp_script.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-04-22_02:54:36
host : n124-174-015.byted.org
rank : 6 (local_rank: 6)
exitcode : 1 (pid: 100805)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
### Expected behavior
With the example scripts [here](Qwen/Qwen2.5-Omni-7B), the model is loaded without torch distributed, so everything works. However, in distributed setups, for a model whose `tp_plan` is None, the `caching_allocator_warmup` here
https://github.com/huggingface/transformers/blob/fee1190601b5d04ec6d3f7f58fd22788d7f3236d/src/transformers/modeling_utils.py#L5874-L5904
will try to iterate over a `_tp_plan` that is None and raise this error.
This bug potentially exists in every distributed environment for any model with no `tp_plan`; models such as Qwen2Audio that do define a `tp_plan` are not affected. I believe the fix is simple: add a check before the iteration and fall back to an empty list when `_tp_plan` is None.
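A minimal sketch of the proposed guard (hypothetical — the real fix would live inside `caching_allocator_warmup` in `modeling_utils.py`; `build_tp_plan_regex` is an illustrative helper name, not an actual transformers function):

```python
import re

def build_tp_plan_regex(tp_plan):
    """Hypothetical guard: tolerate a model that defines no tensor-parallel plan."""
    tp_plan = tp_plan or []  # treat None the same as an empty plan
    if not tp_plan:
        return None  # caller should skip the TP-specific warmup branch
    return re.compile("|".join(re.escape(plan) for plan in tp_plan))

# With the guard, a model whose _tp_plan is None no longer raises TypeError
assert build_tp_plan_regex(None) is None
```

The same `tp_plan or []` fallback applied before the existing `re.compile` call would make the warmup a no-op for models without a plan.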
My temporary workaround is to manually patch the attribute before calling `from_pretrained`:
```python
Qwen2_5OmniForConditionalGeneration._tp_plan = []
``` | {
"login": "kcz358",
"id": 92624596,
"node_id": "U_kgDOBYVW1A",
"avatar_url": "https://avatars.githubusercontent.com/u/92624596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kcz358",
"html_url": "https://github.com/kcz358",
"followers_url": "https://api.github.com/users/kcz358/followers",
"following_url": "https://api.github.com/users/kcz358/following{/other_user}",
"gists_url": "https://api.github.com/users/kcz358/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kcz358/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kcz358/subscriptions",
"organizations_url": "https://api.github.com/users/kcz358/orgs",
"repos_url": "https://api.github.com/users/kcz358/repos",
"events_url": "https://api.github.com/users/kcz358/events{/privacy}",
"received_events_url": "https://api.github.com/users/kcz358/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37663/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37662 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37662/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37662/comments | https://api.github.com/repos/huggingface/transformers/issues/37662/events | https://github.com/huggingface/transformers/pull/37662 | 3,009,515,974 | PR_kwDOCUB6oc6TW19P | 37,662 | enable blip2 and emu3 cases on XPU | {
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T23:54:20 | 2025-04-22T23:03:34 | 2025-04-22T16:37:09 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37662",
"html_url": "https://github.com/huggingface/transformers/pull/37662",
"diff_url": "https://github.com/huggingface/transformers/pull/37662.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37662.patch",
"merged_at": "2025-04-22T16:37:09"
} | 5 cases:
FAILED tests/models/emu3/test_modeling_emu3.py::Emu3IntegrationTest::test_model_generate_images
PASSED tests/models/emu3/test_modeling_emu3.py::Emu3IntegrationTest::test_model_generation_batched
PASSED tests/models/emu3/test_modeling_emu3.py::Emu3IntegrationTest::test_model_generation_multi_image
PASSED tests/models/blip_2/test_modeling_blip_2.py::Blip2TextRetrievalModelTest::test_model_from_pretrained
PASSED tests/models/blip_2/test_modeling_blip_2.py::Blip2VisionModelWithProjectionTest::test_model_from_pretrained
Will fix the 1 failed case in a separate PR.
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37662/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37661 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37661/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37661/comments | https://api.github.com/repos/huggingface/transformers/issues/37661/events | https://github.com/huggingface/transformers/pull/37661 | 3,009,202,920 | PR_kwDOCUB6oc6TVyIV | 37,661 | add fast image processor nougat | {
"login": "NahieliV",
"id": 54726691,
"node_id": "MDQ6VXNlcjU0NzI2Njkx",
"avatar_url": "https://avatars.githubusercontent.com/u/54726691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NahieliV",
"html_url": "https://github.com/NahieliV",
"followers_url": "https://api.github.com/users/NahieliV/followers",
"following_url": "https://api.github.com/users/NahieliV/following{/other_user}",
"gists_url": "https://api.github.com/users/NahieliV/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NahieliV/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NahieliV/subscriptions",
"organizations_url": "https://api.github.com/users/NahieliV/orgs",
"repos_url": "https://api.github.com/users/NahieliV/repos",
"events_url": "https://api.github.com/users/NahieliV/events{/privacy}",
"received_events_url": "https://api.github.com/users/NahieliV/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T20:03:26 | 2025-06-27T14:39:43 | 2025-06-27T14:39:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37661",
"html_url": "https://github.com/huggingface/transformers/pull/37661",
"diff_url": "https://github.com/huggingface/transformers/pull/37661.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37661.patch",
"merged_at": "2025-06-27T14:39:43"
} | # What does this PR do?
Adds fast image processor for Nougat model.
https://github.com/huggingface/transformers/issues/36978
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@yonigozlan
| {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37661/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37660/comments | https://api.github.com/repos/huggingface/transformers/issues/37660/events | https://github.com/huggingface/transformers/pull/37660 | 3,009,189,217 | PR_kwDOCUB6oc6TVvFh | 37,660 | Qwen 2.5 Omni: apply video defaults | {
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T19:56:11 | 2025-04-28T09:29:38 | 2025-04-23T15:08:11 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37660",
"html_url": "https://github.com/huggingface/transformers/pull/37660",
"diff_url": "https://github.com/huggingface/transformers/pull/37660.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37660.patch",
"merged_at": "2025-04-23T15:08:11"
} | # What does this PR do?
Applies `min_pixels` and `max_pixels` values to the video processor.
The values were taken from [the original processing codebase](https://github.com/QwenLM/Qwen2.5-Omni/blob/7c8dddb38d52a58ce57e778e10fa0eaf26e078e9/qwen-omni-utils/src/qwen_omni_utils/v2_5/vision_process.py#L30), which uses a different set for [video](https://github.com/QwenLM/Qwen2.5-Omni/blob/7c8dddb38d52a58ce57e778e10fa0eaf26e078e9/qwen-omni-utils/src/qwen_omni_utils/v2_5/vision_process.py#L30) than it does for [images](https://github.com/QwenLM/Qwen2.5-Omni/blob/7c8dddb38d52a58ce57e778e10fa0eaf26e078e9/qwen-omni-utils/src/qwen_omni_utils/v2_5/vision_process.py#L26).
In our case, the image processor would always default to the image case, which results in frames resized to very large sizes, possibly causing OOMs, and preparing inputs with shapes not seen by the model during training.
## Reproduction
Consider the following snippet:
```py
import soundfile as sf
from transformers import Qwen2_5OmniForConditionalGeneration, Qwen2_5OmniProcessor
model_id = "Qwen/Qwen2.5-Omni-7B"
processor = Qwen2_5OmniProcessor.from_pretrained(model_id)
conversation = [
{
"role": "system",
"content": [
{"type": "text", "text": "You are Qwen, a virtual human developed by the Qwen Team, Alibaba Group, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
],
},
{
"role": "user",
"content": [
{"type": "video", "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4"},
{"type": "text", "text": "What can you hear and see in this video?"},
],
},
]
inputs = processor.apply_chat_template(
conversation,
load_audio_from_video=True,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
video_fps=2,
# kwargs to be passed to `Qwen2-5-OmniProcessor`
padding=True,
use_audio_in_video=True,
)
print(inputs["pixel_values_videos"].shape)
```
* Before this PR: `[886116, 1176]`
* After this PR: `[60480, 1176]`
* Reference, using qwen_omni_utils.process_mm_info: `[57600, 1176]`
The difference between this PR and the reference is that the original codebase selects `40` frames for this video, while we select `41`.
## Alternatives
* Use different config values for image and video processing and persist them to `preprocessor_config.json`.
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37660/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37660/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37659/comments | https://api.github.com/repos/huggingface/transformers/issues/37659/events | https://github.com/huggingface/transformers/issues/37659 | 3,009,014,017 | I_kwDOCUB6oc6zWekB | 37,659 | Avoid adding space when decoding tokenization | {
"login": "cikay",
"id": 24587499,
"node_id": "MDQ6VXNlcjI0NTg3NDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/24587499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cikay",
"html_url": "https://github.com/cikay",
"followers_url": "https://api.github.com/users/cikay/followers",
"following_url": "https://api.github.com/users/cikay/following{/other_user}",
"gists_url": "https://api.github.com/users/cikay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cikay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cikay/subscriptions",
"organizations_url": "https://api.github.com/users/cikay/orgs",
"repos_url": "https://api.github.com/users/cikay/repos",
"events_url": "https://api.github.com/users/cikay/events{/privacy}",
"received_events_url": "https://api.github.com/users/cikay/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-04-21T18:21:03 | 2025-04-21T18:21:49 | null | NONE | null | null | null | null | ### Feature request
Hi, I trained a tokenizer whose tokens contain spaces. When I decode, the `decode` method adds a space between tokens, which corrupts the output, so I need to avoid that. How can I do it?
```py
from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast.from_pretrained("muzaffercky/kurdish-kurmanji-tokenizer", revision="v1.0")

test_text = """
Ez ê di vê gotarê da qala ên ku ez guhdar û temaşe dikim bikim
"""

tokens = tokenizer.tokenize(test_text)
print(f"Tokens: {tokens}")
# output: Tokens: ['\n', 'Ez ê ', 'di vê ', 'got', 'arê ', 'da ', 'qala ', 'ên ku ', 'ez ', 'guh', 'dar û ', 'temaşe ', 'dikim ', 'bikim', '\n']

ids = tokenizer.encode(test_text)
print(f"IDs: {ids}")
# output: IDs: [6, 6271, 1323, 452, 462, 396, 2409, 566, 654, 1204, 3278, 4543, 7880, 7595, 6]

text = tokenizer.decode(ids)
print(f"text: {text}")
# output:
# text:
# Ez ê di vê got arê da qala ên ku ez guh dar û temaşe dikim bikim
```
As you can see, it adds an extra space between tokens when decoding. I know I can do something like the following, but I am curious whether transformers supports something like this built-in:
```py
individual_tokens = [tokenizer.decode([id]) for id in ids]
"".join(individual_tokens)
```
### Motivation
Not writing custom code to avoid adding space between tokens
### Your contribution
No | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37659/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/37658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37658/comments | https://api.github.com/repos/huggingface/transformers/issues/37658/events | https://github.com/huggingface/transformers/pull/37658 | 3,008,715,146 | PR_kwDOCUB6oc6TUJPd | 37,658 | Add GraniteMoeHybrid support for 4.0 | {
"login": "Ssukriti",
"id": 16025630,
"node_id": "MDQ6VXNlcjE2MDI1NjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/16025630?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ssukriti",
"html_url": "https://github.com/Ssukriti",
"followers_url": "https://api.github.com/users/Ssukriti/followers",
"following_url": "https://api.github.com/users/Ssukriti/following{/other_user}",
"gists_url": "https://api.github.com/users/Ssukriti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ssukriti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ssukriti/subscriptions",
"organizations_url": "https://api.github.com/users/Ssukriti/orgs",
"repos_url": "https://api.github.com/users/Ssukriti/repos",
"events_url": "https://api.github.com/users/Ssukriti/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ssukriti/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T15:43:06 | 2025-05-06T04:47:43 | 2025-05-06T04:47:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37658",
"html_url": "https://github.com/huggingface/transformers/pull/37658",
"diff_url": "https://github.com/huggingface/transformers/pull/37658.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37658.patch",
"merged_at": "2025-05-06T04:47:43"
} | # What does this PR do?
This PR adds support for the upcoming Granite 4.0 models. In terms of model architecture, it is a hybrid class with a shared MLP layer and Bamba layers.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37658/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37658/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37657/comments | https://api.github.com/repos/huggingface/transformers/issues/37657/events | https://github.com/huggingface/transformers/pull/37657 | 3,008,646,133 | PR_kwDOCUB6oc6TT52d | 37,657 | Transformers cli clean command | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T15:13:28 | 2025-04-30T11:15:45 | 2025-04-30T11:15:44 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37657",
"html_url": "https://github.com/huggingface/transformers/pull/37657",
"diff_url": "https://github.com/huggingface/transformers/pull/37657.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37657.patch",
"merged_at": "2025-04-30T11:15:44"
} | This updates the `transformers-cli` commands towards a cleaner `transformers` command.
It also updates the `chat` command to accept the model name or path as a positional argument as well as a keyword argument.
This allows the following command:
```
transformers-cli chat --model_name_or_path Qwen/Qwen2.5-3B-Instruct
```
to become
```
transformers chat Qwen/Qwen2.5-3B-Instruct
```
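A minimal `argparse` sketch of how a subcommand can accept the model id either positionally or via a flag (the argument names are illustrative; this is not the actual transformers CLI code):

```python
import argparse

parser = argparse.ArgumentParser(prog="transformers")
subparsers = parser.add_subparsers(dest="command")
chat = subparsers.add_parser("chat")
# Optional positional form: `transformers chat <model>`
chat.add_argument("model_positional", nargs="?", default=None)
# Backwards-compatible keyword form: `--model_name_or_path <model>`
chat.add_argument("--model_name_or_path", default=None)

for argv in (["chat", "Qwen/Qwen2.5-3B-Instruct"],
             ["chat", "--model_name_or_path", "Qwen/Qwen2.5-3B-Instruct"]):
    args = parser.parse_args(argv)
    model = args.model_positional or args.model_name_or_path
    print(model)
```

With `nargs="?"` the positional argument is optional, so both invocation styles resolve to the same model id.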
A follow-up will be done for the `run` command (which is a different method than the `chat` command) to run efficiently. | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37657/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37657/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37656/comments | https://api.github.com/repos/huggingface/transformers/issues/37656/events | https://github.com/huggingface/transformers/pull/37656 | 3,008,493,256 | PR_kwDOCUB6oc6TTYbD | 37,656 | [internvl] fix chat template | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T14:08:24 | 2025-04-23T14:56:37 | 2025-04-23T14:56:37 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37656",
"html_url": "https://github.com/huggingface/transformers/pull/37656",
"diff_url": "https://github.com/huggingface/transformers/pull/37656.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37656.patch",
"merged_at": "2025-04-23T14:56:37"
} | # What does this PR do?
As per the title, loading videos with custom sampling was changed recently and InternVL didn't receive the update. This PR also renames `content_image_token -> image_token`.
@yonigozlan ideally we would have the same `fake_image_token` and `image_token`, which is used in vLLM to construct dummy text inputs. If we change it now, it won't be breaking, as there has been no release yet, WDYT? It will probably need updates in the hub configs and the chat template.
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37656/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37655/comments | https://api.github.com/repos/huggingface/transformers/issues/37655/events | https://github.com/huggingface/transformers/pull/37655 | 3,008,457,694 | PR_kwDOCUB6oc6TTQsU | 37,655 | typo update in the parameter name | {
"login": "LunaticMaestro",
"id": 38010046,
"node_id": "MDQ6VXNlcjM4MDEwMDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/38010046?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LunaticMaestro",
"html_url": "https://github.com/LunaticMaestro",
"followers_url": "https://api.github.com/users/LunaticMaestro/followers",
"following_url": "https://api.github.com/users/LunaticMaestro/following{/other_user}",
"gists_url": "https://api.github.com/users/LunaticMaestro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LunaticMaestro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LunaticMaestro/subscriptions",
"organizations_url": "https://api.github.com/users/LunaticMaestro/orgs",
"repos_url": "https://api.github.com/users/LunaticMaestro/repos",
"events_url": "https://api.github.com/users/LunaticMaestro/events{/privacy}",
"received_events_url": "https://api.github.com/users/LunaticMaestro/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T13:48:58 | 2025-04-22T16:14:21 | 2025-04-22T16:14:20 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37655",
"html_url": "https://github.com/huggingface/transformers/pull/37655",
"diff_url": "https://github.com/huggingface/transformers/pull/37655.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37655.patch",
"merged_at": "2025-04-22T16:14:20"
} | See L118 and L143 for the class attribute `hidden_dim`
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37655/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37654/comments | https://api.github.com/repos/huggingface/transformers/issues/37654/events | https://github.com/huggingface/transformers/pull/37654 | 3,008,426,015 | PR_kwDOCUB6oc6TTJ3R | 37,654 | [Fix] InternVL3 automodel & InternVLProcessor import | {
"login": "quoccuongLE",
"id": 26406021,
"node_id": "MDQ6VXNlcjI2NDA2MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/26406021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quoccuongLE",
"html_url": "https://github.com/quoccuongLE",
"followers_url": "https://api.github.com/users/quoccuongLE/followers",
"following_url": "https://api.github.com/users/quoccuongLE/following{/other_user}",
"gists_url": "https://api.github.com/users/quoccuongLE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quoccuongLE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quoccuongLE/subscriptions",
"organizations_url": "https://api.github.com/users/quoccuongLE/orgs",
"repos_url": "https://api.github.com/users/quoccuongLE/repos",
"events_url": "https://api.github.com/users/quoccuongLE/events{/privacy}",
"received_events_url": "https://api.github.com/users/quoccuongLE/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T13:32:23 | 2025-04-24T09:45:35 | 2025-04-24T09:45:34 | NONE | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37654",
"html_url": "https://github.com/huggingface/transformers/pull/37654",
"diff_url": "https://github.com/huggingface/transformers/pull/37654.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37654.patch",
"merged_at": null
} | # What does this PR do?
This PR fixes a model import problem relating to InternVL3 models, mentioned in #37595.
Fixes # (issue)
* Fixed loading the InternVL model via `AutoModel` by adding `("internvl", "InternVLForConditionalGeneration")` to `MODEL_MAPPING_NAMES`
* Fixed an `InternVLProcessor` loading error occurring when the attributes `start_image_token`, `end_image_token` and `context_image_token` are missing from the tokenizer
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Especially, I encourage anyone on the Hugging Face team or the InternVL team to review my PR.
"login": "quoccuongLE",
"id": 26406021,
"node_id": "MDQ6VXNlcjI2NDA2MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/26406021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/quoccuongLE",
"html_url": "https://github.com/quoccuongLE",
"followers_url": "https://api.github.com/users/quoccuongLE/followers",
"following_url": "https://api.github.com/users/quoccuongLE/following{/other_user}",
"gists_url": "https://api.github.com/users/quoccuongLE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/quoccuongLE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/quoccuongLE/subscriptions",
"organizations_url": "https://api.github.com/users/quoccuongLE/orgs",
"repos_url": "https://api.github.com/users/quoccuongLE/repos",
"events_url": "https://api.github.com/users/quoccuongLE/events{/privacy}",
"received_events_url": "https://api.github.com/users/quoccuongLE/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37654/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37653/comments | https://api.github.com/repos/huggingface/transformers/issues/37653/events | https://github.com/huggingface/transformers/pull/37653 | 3,008,367,574 | PR_kwDOCUB6oc6TS9PX | 37,653 | Non model inits | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-21T12:59:57 | 2025-06-02T09:21:06 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37653",
"html_url": "https://github.com/huggingface/transformers/pull/37653",
"diff_url": "https://github.com/huggingface/transformers/pull/37653.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37653.patch",
"merged_at": null
} | Handle the inits outside of models | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37653/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37653/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37652/comments | https://api.github.com/repos/huggingface/transformers/issues/37652/events | https://github.com/huggingface/transformers/pull/37652 | 3,008,325,684 | PR_kwDOCUB6oc6TS0B2 | 37,652 | fix link in kv_cache.md | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | [] | 2025-04-21T12:33:58 | 2025-04-21T16:01:12 | 2025-04-21T16:01:11 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37652",
"html_url": "https://github.com/huggingface/transformers/pull/37652",
"diff_url": "https://github.com/huggingface/transformers/pull/37652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37652.patch",
"merged_at": "2025-04-21T16:01:11"
} | Fix link reference in cache docs.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37652/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37651/comments | https://api.github.com/repos/huggingface/transformers/issues/37651/events | https://github.com/huggingface/transformers/pull/37651 | 3,008,280,348 | PR_kwDOCUB6oc6TSqGa | 37,651 | Add test to ensure unknown exceptions reraising in utils/hub.py::cached_files() | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T12:06:00 | 2025-04-22T09:38:17 | 2025-04-22T09:38:11 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37651",
"html_url": "https://github.com/huggingface/transformers/pull/37651",
"diff_url": "https://github.com/huggingface/transformers/pull/37651.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37651.patch",
"merged_at": "2025-04-22T09:38:11"
} | After fixing #37477 with #37525, a proper test was still pending, so this PR ensures unknown exceptions are properly re-raised.
Thanks again Cyril for the helpful review and guidance in the previous PR! Let me know if the test is adequate :)
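For readers skimming this PR, the idea can be sketched as follows (all names here — `BoomError`, the stand-in `cached_files` — are illustrative placeholders, not the real `utils/hub.py` code):

```python
# Illustrative stand-in: known errors (e.g. OSError) are handled gracefully,
# while any unknown exception must propagate to the caller, not be swallowed.
class BoomError(Exception):
    """An exception type the hub helper has no special handling for."""

def cached_files(fetch):
    try:
        return fetch()
    except OSError:
        return None  # known failure mode: handled gracefully
    # any other exception type intentionally escapes this function

def broken_fetch():
    raise BoomError("unexpected failure")

# The unknown exception escapes cached_files, which is the behavior under test.
try:
    cached_files(broken_fetch)
    reraised = False
except BoomError:
    reraised = True
print(reraised)  # -> True
```

A real test would wrap the same check in `pytest.raises` while mocking the download call.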
## Who can review?
@Cyrilvallez | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37651/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37650/comments | https://api.github.com/repos/huggingface/transformers/issues/37650/events | https://github.com/huggingface/transformers/pull/37650 | 3,008,239,073 | PR_kwDOCUB6oc6TShkx | 37,650 | Fixing quantization tests | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T11:41:46 | 2025-04-22T11:59:58 | 2025-04-22T11:59:57 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37650",
"html_url": "https://github.com/huggingface/transformers/pull/37650",
"diff_url": "https://github.com/huggingface/transformers/pull/37650.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37650.patch",
"merged_at": "2025-04-22T11:59:57"
} | # What does this PR do?
Fixes many failing quantization tests. | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37650/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37649/comments | https://api.github.com/repos/huggingface/transformers/issues/37649/events | https://github.com/huggingface/transformers/pull/37649 | 3,008,189,484 | PR_kwDOCUB6oc6TSWpt | 37,649 | Support loading Gemma3 QAT GGUF models | {
"login": "Isotr0py",
"id": 41363108,
"node_id": "MDQ6VXNlcjQxMzYzMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/41363108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Isotr0py",
"html_url": "https://github.com/Isotr0py",
"followers_url": "https://api.github.com/users/Isotr0py/followers",
"following_url": "https://api.github.com/users/Isotr0py/following{/other_user}",
"gists_url": "https://api.github.com/users/Isotr0py/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Isotr0py/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Isotr0py/subscriptions",
"organizations_url": "https://api.github.com/users/Isotr0py/orgs",
"repos_url": "https://api.github.com/users/Isotr0py/repos",
"events_url": "https://api.github.com/users/Isotr0py/events{/privacy}",
"received_events_url": "https://api.github.com/users/Isotr0py/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T11:13:22 | 2025-04-22T15:49:34 | 2025-04-22T09:23:17 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37649",
"html_url": "https://github.com/huggingface/transformers/pull/37649",
"diff_url": "https://github.com/huggingface/transformers/pull/37649.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37649.patch",
"merged_at": "2025-04-22T09:23:17"
} | # What does this PR do?
originally reported in vllm: https://github.com/vllm-project/vllm/issues/14723#issuecomment-2817093067
- The Gemma3 QAT GGUF checkpoint doesn't have the `general.basename` field in its metadata
- This PR handles this edge case to avoid raising `KeyError: 'general.name'`
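A hedged sketch of the kind of fallback involved (the function and the fallback order here are illustrative; the actual fix lives in the GGUF loading code):

```python
# Fall back gracefully when optional GGUF metadata keys are missing, instead of
# letting a bare dict lookup raise KeyError: 'general.name'.
def read_model_name(metadata):
    for key in ("general.name", "general.basename"):
        if key in metadata:
            return metadata[key]
    return "unknown"  # checkpoint omitted both fields (e.g. Gemma3 QAT GGUF)

print(read_model_name({"general.name": "gemma-3-qat"}))  # -> gemma-3-qat
print(read_model_name({}))  # -> unknown
```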
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@SunMarc @MekkCyber
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37649/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37649/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37648/comments | https://api.github.com/repos/huggingface/transformers/issues/37648/events | https://github.com/huggingface/transformers/issues/37648 | 3,007,874,390 | I_kwDOCUB6oc6zSIVW | 37,648 | Adding Paged Attention to Qwen1.5-MoE-A2.7B-Chat models using PyTorch XLA and Pallas | {
"login": "ranwangmath1988",
"id": 208410960,
"node_id": "U_kgDODGwZUA",
"avatar_url": "https://avatars.githubusercontent.com/u/208410960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ranwangmath1988",
"html_url": "https://github.com/ranwangmath1988",
"followers_url": "https://api.github.com/users/ranwangmath1988/followers",
"following_url": "https://api.github.com/users/ranwangmath1988/following{/other_user}",
"gists_url": "https://api.github.com/users/ranwangmath1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ranwangmath1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ranwangmath1988/subscriptions",
"organizations_url": "https://api.github.com/users/ranwangmath1988/orgs",
"repos_url": "https://api.github.com/users/ranwangmath1988/repos",
"events_url": "https://api.github.com/users/ranwangmath1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/ranwangmath1988/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-04-21T08:15:45 | 2025-04-21T08:16:18 | null | NONE | null | null | null | null | ### Feature request
[Qwen1.5-MoE-A2.7B-Chat](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B-Chat) is one of the models closest to DeepSeek, with improvements. Currently, FlashAttention is implemented only for GPUs. For TPU training, on the other hand, [Paged Attention](https://pytorch.org/xla/master/features/pallas.html) has already been written for PyTorch/XLA.
### Motivation
In general, TPU v5e and v6e have demonstrated significant cost efficiency for training large models. As language models grow larger, the [vLLM](https://github.com/vllm-project/vllm) library has been continuously developed to tackle low-level computation, and [Pallas](https://docs.jax.dev/en/latest/pallas/index.html) has been actively developed to better integrate JAX/Flax kernels into the PyTorch-XLA ecosystem. While most of these efforts target inference, the possibility of training larger language models, combined with custom training methods such as LoRA, makes TPUs viable for training huge models.
### Your contribution
I can offer testing on Google Cloud TPU, including fine-tuning and other tasks. Unfortunately, I currently do not know how to:
1. Use PagedAttention to replace the regular multi-head attention written as an `nn.Module`.
2. How to load the pre-trained weights into the Paged Attention module. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37648/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/37647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37647/comments | https://api.github.com/repos/huggingface/transformers/issues/37647/events | https://github.com/huggingface/transformers/pull/37647 | 3,007,563,466 | PR_kwDOCUB6oc6TQOH3 | 37,647 | added mllama doc | {
"login": "Nikil-D-Gr8",
"id": 74810781,
"node_id": "MDQ6VXNlcjc0ODEwNzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/74810781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nikil-D-Gr8",
"html_url": "https://github.com/Nikil-D-Gr8",
"followers_url": "https://api.github.com/users/Nikil-D-Gr8/followers",
"following_url": "https://api.github.com/users/Nikil-D-Gr8/following{/other_user}",
"gists_url": "https://api.github.com/users/Nikil-D-Gr8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nikil-D-Gr8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nikil-D-Gr8/subscriptions",
"organizations_url": "https://api.github.com/users/Nikil-D-Gr8/orgs",
"repos_url": "https://api.github.com/users/Nikil-D-Gr8/repos",
"events_url": "https://api.github.com/users/Nikil-D-Gr8/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nikil-D-Gr8/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T04:59:15 | 2025-05-10T04:54:35 | 2025-05-10T04:54:35 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37647",
"html_url": "https://github.com/huggingface/transformers/pull/37647",
"diff_url": "https://github.com/huggingface/transformers/pull/37647.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37647.patch",
"merged_at": null
} | # What does this PR do?
As suggested in [this issue](https://github.com/huggingface/transformers/issues/36979#issuecomment-2817025078), this PR updates the model documentation to align with the standardized format applied across all model docs.
I worked on mllama with AI assistance, so please let me know if you need a complete rewrite.
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
Please let me know if any changes are needed, and share references for those changes if available.
Documentation: @stevhliu
| {
"login": "Nikil-D-Gr8",
"id": 74810781,
"node_id": "MDQ6VXNlcjc0ODEwNzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/74810781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nikil-D-Gr8",
"html_url": "https://github.com/Nikil-D-Gr8",
"followers_url": "https://api.github.com/users/Nikil-D-Gr8/followers",
"following_url": "https://api.github.com/users/Nikil-D-Gr8/following{/other_user}",
"gists_url": "https://api.github.com/users/Nikil-D-Gr8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nikil-D-Gr8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nikil-D-Gr8/subscriptions",
"organizations_url": "https://api.github.com/users/Nikil-D-Gr8/orgs",
"repos_url": "https://api.github.com/users/Nikil-D-Gr8/repos",
"events_url": "https://api.github.com/users/Nikil-D-Gr8/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nikil-D-Gr8/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37647/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37646/comments | https://api.github.com/repos/huggingface/transformers/issues/37646/events | https://github.com/huggingface/transformers/issues/37646 | 3,007,496,186 | I_kwDOCUB6oc6zQr_6 | 37,646 | "pipeline" is not exported from module "transformers" | {
"login": "tekumara",
"id": 125105,
"node_id": "MDQ6VXNlcjEyNTEwNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/125105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tekumara",
"html_url": "https://github.com/tekumara",
"followers_url": "https://api.github.com/users/tekumara/followers",
"following_url": "https://api.github.com/users/tekumara/following{/other_user}",
"gists_url": "https://api.github.com/users/tekumara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tekumara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tekumara/subscriptions",
"organizations_url": "https://api.github.com/users/tekumara/orgs",
"repos_url": "https://api.github.com/users/tekumara/repos",
"events_url": "https://api.github.com/users/tekumara/events{/privacy}",
"received_events_url": "https://api.github.com/users/tekumara/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-21T03:55:11 | 2025-10-12T08:04:17 | 2025-10-12T08:04:17 | NONE | null | null | null | null | ### System Info
In vscode/pyright:
```python
from transformers import pipeline
```
Reports the type error:
```
"pipeline" is not exported from module "transformers"
Import from "transformers.pipelines" instead Pylance[reportPrivateImportUsage]
```
### Who can help?
@Rocketknight1
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
```
### Expected behavior
No type error. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37646/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37646/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37645/comments | https://api.github.com/repos/huggingface/transformers/issues/37645/events | https://github.com/huggingface/transformers/pull/37645 | 3,007,460,836 | PR_kwDOCUB6oc6TP4oX | 37,645 | enable 4 test_trainer cases on XPU | {
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T03:17:43 | 2025-04-23T22:40:11 | 2025-04-23T19:29:42 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37645",
"html_url": "https://github.com/huggingface/transformers/pull/37645",
"diff_url": "https://github.com/huggingface/transformers/pull/37645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37645.patch",
"merged_at": "2025-04-23T19:29:42"
} | The 4 cases below passed:
tests/trainer/test_trainer.py::TrainerIntegrationTest::test_galore_lr_display_with_scheduler
tests/trainer/test_trainer.py::TrainerIntegrationTest::test_galore_lr_display_without_scheduler
tests/trainer/test_trainer.py::TrainerIntegrationTest::test_schedulefree_radam
tests/trainer/test_trainer.py::TrainerIntegrationTest::test_use_liger_kernel_trainer
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37645/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37644/comments | https://api.github.com/repos/huggingface/transformers/issues/37644/events | https://github.com/huggingface/transformers/pull/37644 | 3,007,379,791 | PR_kwDOCUB6oc6TPn64 | 37,644 | enable mllama cases on xpu | {
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-21T01:48:43 | 2025-04-22T22:54:22 | 2025-04-22T15:39:10 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37644",
"html_url": "https://github.com/huggingface/transformers/pull/37644",
"diff_url": "https://github.com/huggingface/transformers/pull/37644.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37644.patch",
"merged_at": "2025-04-22T15:39:10"
} | 5 cases pass on both XPU and A100.
tests/models/mllama/test_modeling_mllama.py::MllamaForConditionalGenerationIntegrationTest::test_11b_model_integration_batched_generate
tests/models/mllama/test_modeling_mllama.py::MllamaForConditionalGenerationIntegrationTest::test_11b_model_integration_forward
tests/models/mllama/test_modeling_mllama.py::MllamaForConditionalGenerationIntegrationTest::test_11b_model_integration_generate
tests/models/mllama/test_modeling_mllama.py::MllamaForConditionalGenerationIntegrationTest::test_11b_model_integration_generate_text_only
tests/models/mllama/test_modeling_mllama.py::MllamaForConditionalGenerationIntegrationTest::test_11b_model_integration_multi_image_generate
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37644/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37643/comments | https://api.github.com/repos/huggingface/transformers/issues/37643/events | https://github.com/huggingface/transformers/pull/37643 | 3,007,323,307 | PR_kwDOCUB6oc6TPcWK | 37,643 | Add support for manually setting `head_dim` in Qwen2 MoE | {
"login": "LuHC409",
"id": 113956490,
"node_id": "U_kgDOBsrWig",
"avatar_url": "https://avatars.githubusercontent.com/u/113956490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LuHC409",
"html_url": "https://github.com/LuHC409",
"followers_url": "https://api.github.com/users/LuHC409/followers",
"following_url": "https://api.github.com/users/LuHC409/following{/other_user}",
"gists_url": "https://api.github.com/users/LuHC409/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LuHC409/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LuHC409/subscriptions",
"organizations_url": "https://api.github.com/users/LuHC409/orgs",
"repos_url": "https://api.github.com/users/LuHC409/repos",
"events_url": "https://api.github.com/users/LuHC409/events{/privacy}",
"received_events_url": "https://api.github.com/users/LuHC409/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-21T00:37:47 | 2025-05-09T08:07:38 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37643",
"html_url": "https://github.com/huggingface/transformers/pull/37643",
"diff_url": "https://github.com/huggingface/transformers/pull/37643.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37643.patch",
"merged_at": null
} | # Add support for manually setting `head_dim` in Qwen2 MoE
## Problem Description
Currently, in the Qwen2 MoE model, the `head_dim` is strictly set to `hidden_size // num_attention_heads`. This PR adds support for manually setting the `head_dim`, similar to the implementation in the Llama, Mistral, and Mixtral models.
## Changes
- Added a `head_dim` parameter to `Qwen2MoeConfig`
- Modified the `Qwen2MoeAttention` class to support manually defined `head_dim`
- Added corresponding test cases to verify the functionality
## Tests
The following test cases have been added:
- `test_head_dim_manual_setting`: Tests the functionality of manually setting `head_dim`
- `test_head_dim_auto_calculation`: Tests the default automatic calculation of `head_dim`
- `test_head_dim_validation`: Tests validation logic for `head_dim`
## Related Issue
Fixes #36659
## Checklist
- [x] This PR fixes a functional issue
- [x] I have read the contribution guidelines
- [x] The feature was discussed and approved in the issue
- [x] Documentation has been updated accordingly
- [x] Necessary test cases have been added
## Reviewers
@Rocketknight1 @ArthurZucker | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37643/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37642/comments | https://api.github.com/repos/huggingface/transformers/issues/37642/events | https://github.com/huggingface/transformers/pull/37642 | 3,007,295,428 | PR_kwDOCUB6oc6TPWyN | 37,642 | Add time-based evaluation strategy to Trainer | {
"login": "ailunc",
"id": 131329865,
"node_id": "U_kgDOB9PvSQ",
"avatar_url": "https://avatars.githubusercontent.com/u/131329865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ailunc",
"html_url": "https://github.com/ailunc",
"followers_url": "https://api.github.com/users/ailunc/followers",
"following_url": "https://api.github.com/users/ailunc/following{/other_user}",
"gists_url": "https://api.github.com/users/ailunc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ailunc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ailunc/subscriptions",
"organizations_url": "https://api.github.com/users/ailunc/orgs",
"repos_url": "https://api.github.com/users/ailunc/repos",
"events_url": "https://api.github.com/users/ailunc/events{/privacy}",
"received_events_url": "https://api.github.com/users/ailunc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-20T23:47:58 | 2025-05-09T13:23:00 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37642",
"html_url": "https://github.com/huggingface/transformers/pull/37642",
"diff_url": "https://github.com/huggingface/transformers/pull/37642.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37642.patch",
"merged_at": null
} | # What does this PR do?
**Add time-based evaluation strategy to Trainer**
This PR introduces a time-based evaluation, saving, and logging strategy to the Hugging Face `Trainer`. Previously, the Trainer only supported step-based intervals, which can be inconvenient when running on different hardware setups (e.g., a slower local CPU vs. a fast GPU cluster). Step-based intervals don’t translate well across environments with different speeds, making time-based configuration more practical and flexible.
This change enables configurations like:
```python
training_args = TrainingArguments(
output_dir="./test_output",
eval_strategy=IntervalStrategy.TIME, # Evaluate every N minutes
eval_minutes=1,
save_strategy=IntervalStrategy.TIME, # Save every N minutes
save_minutes=1,
logging_strategy=IntervalStrategy.TIME, # Log every N minutes
logging_minutes=1,
per_device_train_batch_size=8,
num_train_epochs=200,
)
```
This is especially helpful for:
- Avoiding lost progress during long training jobs by saving periodically
- Keeping evaluation frequency consistent across devices and batch sizes
- Monitoring model progress during training in time-based intervals
Fixes #36310
**Dependencies**: None
---
## Before submitting
- [x] This PR improves Trainer functionality (not just docs or typos)
- [x] I have read the [contributor guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request)
- [x] This feature was discussed in [issue #36310](https://github.com/huggingface/transformers/issues/36310)
- [x] I have updated/added documentation where necessary
- [x] I have written new tests to cover this functionality
---
## Tests
New tests have been added in `tests/test_time_based_strategy.py` to verify the core functionalities of the time-based strategies:
1. `test_eval_time_based`:
- Validates that evaluation is triggered correctly using `eval_strategy=IntervalStrategy.TIME` and `eval_minutes=1`.
2. `test_save_time_based`:
- Verifies that model checkpoints are saved based on time intervals using `save_strategy=IntervalStrategy.TIME` and `save_minutes=1`.
3. `test_logging_time_based`:
- Checks that logs are written at time-based intervals using `logging_strategy=IntervalStrategy.TIME` and `logging_minutes=1`.
All tests use `distilbert-base-uncased` on the SST-2 dataset and run for a large number of epochs (`num_train_epochs=200`) to ensure that evaluation, saving, and logging occur multiple times. These tests confirm the correct integration and behavior of the time-based strategy during training.
---
## Contributors
- @zhuallen @Zephyr271828 @LuHC409 @Harry-Yang0518
---
## Who can review?
- @zach-huggingface
- @SunMarc
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37642/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37642/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37641/comments | https://api.github.com/repos/huggingface/transformers/issues/37641/events | https://github.com/huggingface/transformers/issues/37641 | 3,007,201,151 | I_kwDOCUB6oc6zPj9_ | 37,641 | Error message is misleading for missing protobuf | {
"login": "Ishan-Kumar2",
"id": 46553104,
"node_id": "MDQ6VXNlcjQ2NTUzMTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/46553104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ishan-Kumar2",
"html_url": "https://github.com/Ishan-Kumar2",
"followers_url": "https://api.github.com/users/Ishan-Kumar2/followers",
"following_url": "https://api.github.com/users/Ishan-Kumar2/following{/other_user}",
"gists_url": "https://api.github.com/users/Ishan-Kumar2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ishan-Kumar2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ishan-Kumar2/subscriptions",
"organizations_url": "https://api.github.com/users/Ishan-Kumar2/orgs",
"repos_url": "https://api.github.com/users/Ishan-Kumar2/repos",
"events_url": "https://api.github.com/users/Ishan-Kumar2/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ishan-Kumar2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-20T19:59:54 | 2025-05-29T08:02:57 | 2025-05-29T08:02:57 | NONE | null | null | null | null | ### System Info
transformers==4.51.3
I am trying to use a model using AutoModel.from_pretrained and it kept giving me this error.
OSError: Dream-org/Dream-v0-Instruct-7B does not appear to have files named ('model-00001-of-00004.safetensors', 'model-00002-of-00004.safetensors', 'model-00003-of-00004.safetensors'). Checkout 'https://huggingface.co/Dream-org/Dream-v0-Instruct-7B/tree/main'for available files.
Despite the files being present in the repo.
It turns out this error occurs because the **protobuf** library is not present. Can we update the error message to reflect this? I would be happy to attempt to fix this!
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
model = AutoModel.from_pretrained(
"Dream-org/Dream-v0-Instruct-7B", trust_remote_code=True, torch_dtype=torch.float16, device_map="auto", use_safetensors=None
)
### Expected behavior
Error message should be along the lines of
ImportError:
requires the protobuf library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/protocolbuffers/protobuf/tree/master/python#installation and follow the ones
that match your environment. Please note that you may need to restart your runtime after installation.
| {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37641/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37640/comments | https://api.github.com/repos/huggingface/transformers/issues/37640/events | https://github.com/huggingface/transformers/pull/37640 | 3,007,172,019 | PR_kwDOCUB6oc6TO-8K | 37,640 | Fix incorrect installation instructions (for issue #37476) | {
"login": "Zephyr271828",
"id": 109715540,
"node_id": "U_kgDOBoogVA",
"avatar_url": "https://avatars.githubusercontent.com/u/109715540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zephyr271828",
"html_url": "https://github.com/Zephyr271828",
"followers_url": "https://api.github.com/users/Zephyr271828/followers",
"following_url": "https://api.github.com/users/Zephyr271828/following{/other_user}",
"gists_url": "https://api.github.com/users/Zephyr271828/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zephyr271828/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zephyr271828/subscriptions",
"organizations_url": "https://api.github.com/users/Zephyr271828/orgs",
"repos_url": "https://api.github.com/users/Zephyr271828/repos",
"events_url": "https://api.github.com/users/Zephyr271828/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zephyr271828/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-04-20T18:54:01 | 2025-05-08T15:32:59 | 2025-05-08T15:32:58 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37640",
"html_url": "https://github.com/huggingface/transformers/pull/37640",
"diff_url": "https://github.com/huggingface/transformers/pull/37640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37640.patch",
"merged_at": "2025-05-08T15:32:58"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR is a proposed solution for issue #37476. Specifically, this PR modifies the description in README.md and adds a requirements.txt file and an install.sh file:
- README.md: changes the Python version requirement from 3.9+ to 3.9-3.12 and adds working installation commands. These changes will also fix the [transformers PyPI page](https://pypi.org/project/transformers/).
```py
# venv
python -m venv .my-env
source .my-env/bin/activate
pip install "jax>=0.4.1,<=0.4.13"
pip install "optax>=0.0.8,<=0.1.4"
pip install "orbax-checkpoint==0.2.3"
pip install "torch>=2.1"
pip install "tensorflow>2.9,<2.16"
pip install "flax>=0.4.1,<=0.7.0"
pip install "accelerate>=0.26.0"
```
- requirements.txt: lists the core requirements of the package as specified in [setup.py](https://github.com/huggingface/transformers/blob/6daa3eeba582facb57cd71db8efb66998b12942f/setup.py#L436). Note that more variants of requirements.txt should be added, but I'm not sure how to do it. I would be grateful if you could give me some suggestions!
- install.sh: a script that verifies the Python version, creates the virtual env, installs the core packages, and validates the installation with the [quickstart example](https://github.com/huggingface/transformers?tab=readme-ov-file#quickstart). The script closely follows the version requirements in setup.py, and has been tested and works on my machine. If you encounter any bugs, please tell me so that I can fix them accordingly.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37640/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37639/comments | https://api.github.com/repos/huggingface/transformers/issues/37639/events | https://github.com/huggingface/transformers/issues/37639 | 3,007,148,300 | I_kwDOCUB6oc6zPXEM | 37,639 | TrOCR (image-to-text) produces incorrect output (':') on 12th Gen Intel CPU (i7-1260P) even with simple input | {
"login": "eximius313",
"id": 5578017,
"node_id": "MDQ6VXNlcjU1NzgwMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5578017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eximius313",
"html_url": "https://github.com/eximius313",
"followers_url": "https://api.github.com/users/eximius313/followers",
"following_url": "https://api.github.com/users/eximius313/following{/other_user}",
"gists_url": "https://api.github.com/users/eximius313/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eximius313/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eximius313/subscriptions",
"organizations_url": "https://api.github.com/users/eximius313/orgs",
"repos_url": "https://api.github.com/users/eximius313/repos",
"events_url": "https://api.github.com/users/eximius313/events{/privacy}",
"received_events_url": "https://api.github.com/users/eximius313/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-20T18:00:29 | 2025-04-21T15:25:30 | 2025-04-21T15:25:29 | NONE | null | null | null | null | ### System Info
```
- `transformers` version: 4.51.3
- Platform: Windows-10-10.0.26100-SP0
- Python version: 3.11.9
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: 1.6.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
TrOCR (both pipeline and manual processing with VisionEncoderDecoderModel) consistently outputs ':' on CPU, even for simple English text images and regardless of num_threads.
Basic PyTorch ops, fill-mask (BERT), image-classification (ViT) work correctly on the same CPU and environment.
Code:
```python
import os
import torch
num_threads = 1
print(f"Setting OMP_NUM_THREADS and MKL_NUM_THREADS to: {num_threads}")
os.environ['OMP_NUM_THREADS'] = str(num_threads)
os.environ['MKL_NUM_THREADS'] = str(num_threads)
torch.set_num_threads(num_threads)
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
image_path = "C:/ocr.png"
model_id = 'microsoft/trocr-large-printed'
try:
processor = TrOCRProcessor.from_pretrained(model_id)
except Exception as e:
exit()
try:
model = VisionEncoderDecoderModel.from_pretrained(model_id)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
except Exception as e:
exit()
try:
img = Image.open(image_path).convert("RGB")
except Exception as e:
exit()
try:
pixel_values = processor(images=img, return_tensors="pt").pixel_values
pixel_values = pixel_values.to(device)
except Exception as e:
exit()
try:
generated_ids = model.generate(pixel_values, max_length=50)
except Exception as e:
exit()
try:
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print("\n--- OUTPUT ---")
print(generated_text)
print("------------------------------------\n")
except Exception as e:
exit()
```
Image:

Steps to reproduce:
`python example.py`
### Expected behavior
It displays "This is some English text" | {
"login": "eximius313",
"id": 5578017,
"node_id": "MDQ6VXNlcjU1NzgwMTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5578017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eximius313",
"html_url": "https://github.com/eximius313",
"followers_url": "https://api.github.com/users/eximius313/followers",
"following_url": "https://api.github.com/users/eximius313/following{/other_user}",
"gists_url": "https://api.github.com/users/eximius313/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eximius313/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eximius313/subscriptions",
"organizations_url": "https://api.github.com/users/eximius313/orgs",
"repos_url": "https://api.github.com/users/eximius313/repos",
"events_url": "https://api.github.com/users/eximius313/events{/privacy}",
"received_events_url": "https://api.github.com/users/eximius313/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37639/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37638/comments | https://api.github.com/repos/huggingface/transformers/issues/37638/events | https://github.com/huggingface/transformers/pull/37638 | 3,007,081,956 | PR_kwDOCUB6oc6TOtkE | 37,638 | [WIP] Support modernBERT for encoder-decoder models | {
"login": "yijun-lee",
"id": 119404328,
"node_id": "U_kgDOBx33KA",
"avatar_url": "https://avatars.githubusercontent.com/u/119404328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yijun-lee",
"html_url": "https://github.com/yijun-lee",
"followers_url": "https://api.github.com/users/yijun-lee/followers",
"following_url": "https://api.github.com/users/yijun-lee/following{/other_user}",
"gists_url": "https://api.github.com/users/yijun-lee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yijun-lee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yijun-lee/subscriptions",
"organizations_url": "https://api.github.com/users/yijun-lee/orgs",
"repos_url": "https://api.github.com/users/yijun-lee/repos",
"events_url": "https://api.github.com/users/yijun-lee/events{/privacy}",
"received_events_url": "https://api.github.com/users/yijun-lee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-20T15:32:29 | 2025-05-12T02:37:02 | 2025-04-24T11:03:04 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37638",
"html_url": "https://github.com/huggingface/transformers/pull/37638",
"diff_url": "https://github.com/huggingface/transformers/pull/37638.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37638.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #35385
This is the first draft — I’d appreciate any feedback!
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "yijun-lee",
"id": 119404328,
"node_id": "U_kgDOBx33KA",
"avatar_url": "https://avatars.githubusercontent.com/u/119404328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yijun-lee",
"html_url": "https://github.com/yijun-lee",
"followers_url": "https://api.github.com/users/yijun-lee/followers",
"following_url": "https://api.github.com/users/yijun-lee/following{/other_user}",
"gists_url": "https://api.github.com/users/yijun-lee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yijun-lee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yijun-lee/subscriptions",
"organizations_url": "https://api.github.com/users/yijun-lee/orgs",
"repos_url": "https://api.github.com/users/yijun-lee/repos",
"events_url": "https://api.github.com/users/yijun-lee/events{/privacy}",
"received_events_url": "https://api.github.com/users/yijun-lee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37638/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37637 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37637/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37637/comments | https://api.github.com/repos/huggingface/transformers/issues/37637/events | https://github.com/huggingface/transformers/issues/37637 | 3,006,907,057 | I_kwDOCUB6oc6zOcKx | 37,637 | Processor multiprocessing error when load custom processor | {
"login": "Kuangdd01",
"id": 82590017,
"node_id": "MDQ6VXNlcjgyNTkwMDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/82590017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kuangdd01",
"html_url": "https://github.com/Kuangdd01",
"followers_url": "https://api.github.com/users/Kuangdd01/followers",
"following_url": "https://api.github.com/users/Kuangdd01/following{/other_user}",
"gists_url": "https://api.github.com/users/Kuangdd01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kuangdd01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kuangdd01/subscriptions",
"organizations_url": "https://api.github.com/users/Kuangdd01/orgs",
"repos_url": "https://api.github.com/users/Kuangdd01/repos",
"events_url": "https://api.github.com/users/Kuangdd01/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kuangdd01/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-20T09:13:54 | 2025-06-12T08:02:55 | 2025-06-12T08:02:55 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.51.1
- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: 1.6.0
- Accelerate config: not found
- DeepSpeed version: 0.16.5
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A800 80GB PCIe
### Who can help?
@zucchini-nlp @ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I made a minimal sample to reproduce the problem.
```python
from datasets import Dataset
from transformers import AutoProcessor
from PIL import Image
from io import BytesIO
import requests
# fake dataset
ds = [
("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG", "What animal is on the candy?"),
("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG", "What animal is on the candy?"),
("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG", "What animal is on the candy?"),
("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG", "What animal is on the candy?"),
]
paths, texts = zip(*ds)
dataset = Dataset.from_dict({
"image_path": list(paths),
"text": list(texts)
})
# load kimi_vl processor
# fails here :[
processor = AutoProcessor.from_pretrained("moonshotai/Kimi-VL-A3B-Thinking", trust_remote_code=True)
# works nicely for official implementation
# processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf", trust_remote_code=True)
class Test:
def __init__(self, processor):
self.processor = processor
def preprocess(self, example):
path = example["image_path"]
if path.startswith("http://") or path.startswith("https://"):
resp = requests.get(path, timeout=10)
resp.raise_for_status()
image = Image.open(BytesIO(resp.content)).convert("RGB")
else:
image = Image.open(path).convert("RGB")
outputs = self.processor(images=image, text=example["text"], return_tensors="pt")
return {"out": outputs}
t = Test(processor)
processed = dataset.map(
t.preprocess,
remove_columns=["image_path", "text"],
num_proc=4, # kimi processor fails when num_proc > 1
batched=False,
)
print(processed[0]) # show case
```
If we use a custom processor like `KimiVLProcessor`, the following error will be raised:
```
RuntimeError Traceback (most recent call last)
[<ipython-input-12-791f4b3f5e14>](https://localhost:8080/#) in <cell line: 0>()
41
42 t = Test(processor)
---> 43 processed = dataset.map(
44 t.preprocess,
45 remove_columns=["image_path", "text"],
2 frames
[/usr/local/lib/python3.11/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in iflatmap_unordered(pool, func, kwargs_iterable)
711 pool_changed = True
712 # One of the subprocesses has died. We should not wait forever.
--> 713 raise RuntimeError(
714 "One of the subprocesses has abruptly died during map operation."
715 "To debug the error, disable multiprocessing."
RuntimeError: One of the subprocesses has abruptly died during map operation.To debug the error, disable multiprocessing.
```
```
Process ForkPoolWorker-12:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/multiprocess/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/python3.11/dist-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.11/dist-packages/multiprocess/pool.py", line 114, in worker
task = get()
^^^^^
File "/usr/local/lib/python3.11/dist-packages/multiprocess/queues.py", line 370, in get
return _ForkingPickler.loads(res)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/dill/_dill.py", line 303, in loads
return load(file, ignore, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/dill/_dill.py", line 289, in load
return Unpickler(file, ignore=ignore, **kwds).load()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/dill/_dill.py", line 444, in load
obj = StockUnpickler.load(self)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/dill/_dill.py", line 593, in _create_type
return typeobj(*args)
^^^^^^^^^^^^^^
File "/usr/lib/python3.11/typing.py", line 2992, in __new__
raise TypeError('cannot inherit from both a TypedDict type '
TypeError: cannot inherit from both a TypedDict type and a non-TypedDict base class
```
After tracing the error back through `typing.py`, I found it is caused by this class: `KimiVLProcessorKwargs`, which fails the typing check.
I compared `processing_llava.py` and `processing_kimivl.py` but couldn't find the relevant difference. 😢
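For context, the typing check that fails can be reproduced in isolation. A minimal sketch (the class names here are made up and are not from the Kimi-VL code):

```python
from typing import TypedDict


class MyProcessorKwargs(TypedDict, total=False):
    padding: bool


class PlainBase:
    pass


try:
    # A TypedDict may only inherit from other TypedDicts, so mixing in a
    # plain class is rejected at class-creation time. This is the same
    # check that fires when dill re-creates the kwargs class in the
    # worker process.
    class Broken(MyProcessorKwargs, PlainBase):
        pass
except TypeError as e:
    print(e)
```

This suggests the problem is in how dill reconstructs the dynamically loaded class in the subprocess, not in the processor logic itself.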
Would anyone be able to help me with this? If I've missed something, please point it out to me. 😃
Related Issue https://github.com/hiyouga/LLaMA-Factory/issues/7763
I also want to know how we can register/insert our custom processor in the pipeline.
**Any help is greatly appreciated!**
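One workaround I can think of (an untested sketch; `lazy_singleton` is a made-up helper, not a transformers API) is to build the processor lazily inside each worker, so the dynamically generated kwargs class never has to travel through dill:

```python
def lazy_singleton(loader):
    """Build the object on first use in each process, instead of pickling it."""
    cache = {}

    def get():
        if "obj" not in cache:
            cache["obj"] = loader()
        return cache["obj"]

    return get


# In the reproduction script this would replace `Test.__init__` holding the
# processor, e.g. (assumption, untested):
# get_processor = lazy_singleton(
#     lambda: AutoProcessor.from_pretrained(
#         "moonshotai/Kimi-VL-A3B-Thinking", trust_remote_code=True
#     )
# )
# ...and `preprocess` would call get_processor() instead of self.processor.
```

Only the small closure is pickled to each worker; the processor itself is constructed fresh per process on first use.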
### Expected behavior
The processor works nicely in a multiprocess pipeline. 😄 | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37637/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37636 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37636/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37636/comments | https://api.github.com/repos/huggingface/transformers/issues/37636/events | https://github.com/huggingface/transformers/pull/37636 | 3,006,851,627 | PR_kwDOCUB6oc6TOAcL | 37,636 | Add counters for dataset classes | {
"login": "jiangyukunok",
"id": 7341895,
"node_id": "MDQ6VXNlcjczNDE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7341895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangyukunok",
"html_url": "https://github.com/jiangyukunok",
"followers_url": "https://api.github.com/users/jiangyukunok/followers",
"following_url": "https://api.github.com/users/jiangyukunok/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangyukunok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangyukunok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangyukunok/subscriptions",
"organizations_url": "https://api.github.com/users/jiangyukunok/orgs",
"repos_url": "https://api.github.com/users/jiangyukunok/repos",
"events_url": "https://api.github.com/users/jiangyukunok/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangyukunok/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-20T07:00:18 | 2025-04-26T02:08:13 | 2025-04-22T16:30:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37636",
"html_url": "https://github.com/huggingface/transformers/pull/37636",
"diff_url": "https://github.com/huggingface/transformers/pull/37636.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37636.patch",
"merged_at": "2025-04-22T16:30:43"
} | # What does this PR do?
This improves the logging in **examples/pytorch/text-classification/run_glue.py** by adding counts for each label in the train/validation/test datasets. The logs are useful for classification tasks (spotting class imbalance early, checking data-split consistency, etc.).
test result:
04/20/2025 03:52:36 - INFO - __main__ - Class distribution in train set:
04/20/2025 03:52:36 - INFO - __main__ - Label 1: 2474 (67.45%)
04/20/2025 03:52:36 - INFO - __main__ - Label 0: 1194 (32.55%)
04/20/2025 03:52:36 - INFO - __main__ - Class distribution in validation set:
04/20/2025 03:52:36 - INFO - __main__ - Label 1: 279 (68.38%)
04/20/2025 03:52:36 - INFO - __main__ - Label 0: 129 (31.62%)
04/20/2025 03:52:36 - INFO - __main__ - Class distribution in test set:
04/20/2025 03:52:36 - INFO - __main__ - Label 1: 1147 (66.49%)
04/20/2025 03:52:36 - INFO - __main__ - Label 0: 578 (33.51%)
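The counts above can be computed in a few lines; a minimal sketch of the idea (the helper name is mine, not the PR's):

```python
from collections import Counter


def class_distribution(labels):
    """Return (label, count, fraction) tuples, most common first."""
    counts = Counter(labels)
    total = sum(counts.values())
    return [(label, n, n / total) for label, n in counts.most_common()]


for label, n, frac in class_distribution([1, 1, 0, 1]):
    print(f"Label {label}: {n} ({frac:.2%})")
```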
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37636/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37635 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37635/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37635/comments | https://api.github.com/repos/huggingface/transformers/issues/37635/events | https://github.com/huggingface/transformers/pull/37635 | 3,006,842,112 | PR_kwDOCUB6oc6TN-jk | 37,635 | Add resume checkpoint support to ClearML callback | {
"login": "ailunc",
"id": 131329865,
"node_id": "U_kgDOB9PvSQ",
"avatar_url": "https://avatars.githubusercontent.com/u/131329865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ailunc",
"html_url": "https://github.com/ailunc",
"followers_url": "https://api.github.com/users/ailunc/followers",
"following_url": "https://api.github.com/users/ailunc/following{/other_user}",
"gists_url": "https://api.github.com/users/ailunc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ailunc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ailunc/subscriptions",
"organizations_url": "https://api.github.com/users/ailunc/orgs",
"repos_url": "https://api.github.com/users/ailunc/repos",
"events_url": "https://api.github.com/users/ailunc/events{/privacy}",
"received_events_url": "https://api.github.com/users/ailunc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-20T06:33:59 | 2025-04-22T16:40:17 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37635",
"html_url": "https://github.com/huggingface/transformers/pull/37635",
"diff_url": "https://github.com/huggingface/transformers/pull/37635.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37635.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #37502 (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/37502
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37635/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37634 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37634/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37634/comments | https://api.github.com/repos/huggingface/transformers/issues/37634/events | https://github.com/huggingface/transformers/pull/37634 | 3,006,830,849 | PR_kwDOCUB6oc6TN8Yv | 37,634 | Add PLM Model | {
"login": "JiwenJ",
"id": 125579488,
"node_id": "U_kgDOB3ww4A",
"avatar_url": "https://avatars.githubusercontent.com/u/125579488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JiwenJ",
"html_url": "https://github.com/JiwenJ",
"followers_url": "https://api.github.com/users/JiwenJ/followers",
"following_url": "https://api.github.com/users/JiwenJ/following{/other_user}",
"gists_url": "https://api.github.com/users/JiwenJ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JiwenJ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JiwenJ/subscriptions",
"organizations_url": "https://api.github.com/users/JiwenJ/orgs",
"repos_url": "https://api.github.com/users/JiwenJ/repos",
"events_url": "https://api.github.com/users/JiwenJ/events{/privacy}",
"received_events_url": "https://api.github.com/users/JiwenJ/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-04-20T06:03:02 | 2025-08-25T12:41:43 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37634",
"html_url": "https://github.com/huggingface/transformers/pull/37634",
"diff_url": "https://github.com/huggingface/transformers/pull/37634.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37634.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
In March 2025, the PLM Team introduced __PLM__, a **P**eripheral **L**anguage **M**odel, developed through a co-design process that jointly optimizes the model architecture and edge-system constraints. PLM uses a Multi-head Latent Attention mechanism and the squared ReLU activation function to encourage sparsity, thereby reducing the peak memory footprint during inference. We hope the PLM series will contribute to the open-source community, and we will continue this effort by releasing more efficient models in the coming months.
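As a quick illustration of the activation mentioned above, a minimal NumPy sketch of squared ReLU (not taken from the PLM code):

```python
import numpy as np


def relu_squared(x: np.ndarray) -> np.ndarray:
    """Squared ReLU: max(0, x)**2. Negative inputs map to exactly zero,
    which is what makes the resulting activations sparse."""
    return np.square(np.maximum(x, 0.0))


print(relu_squared(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))
```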
This PR aims to integrate PLM into the transformers library, making it more accessible and user-friendly for the open-source community.
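For reference, the squared ReLU activation mentioned above is simply `max(x, 0)**2` applied elementwise — non-positive inputs map to exactly zero, which is what encourages sparse activations (a minimal illustrative sketch, not the model's actual implementation):

```python
def relu_squared(values):
    # squared ReLU: negative inputs become exactly 0.0, positives become x**2
    return [max(v, 0.0) ** 2 for v in values]

print(relu_squared([-1.5, 0.0, 2.0]))  # [0.0, 0.0, 4.0]
```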
## TODO
- [ ] We will provide more test cases for PLM.
### Useful Links
- arXiv: [PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing](https://arxiv.org/html/2503.12167v1)
- Hugging Face: https://huggingface.co/PLM-Team
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@qubvel @ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37634/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37633 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37633/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37633/comments | https://api.github.com/repos/huggingface/transformers/issues/37633/events | https://github.com/huggingface/transformers/pull/37633 | 3,006,819,372 | PR_kwDOCUB6oc6TN6CR | 37,633 | [fix gemma] Set default value for output_attentions parameter in Gemma2 and Gemma… | {
"login": "chenin-wang",
"id": 88609203,
"node_id": "MDQ6VXNlcjg4NjA5MjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/88609203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenin-wang",
"html_url": "https://github.com/chenin-wang",
"followers_url": "https://api.github.com/users/chenin-wang/followers",
"following_url": "https://api.github.com/users/chenin-wang/following{/other_user}",
"gists_url": "https://api.github.com/users/chenin-wang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenin-wang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenin-wang/subscriptions",
"organizations_url": "https://api.github.com/users/chenin-wang/orgs",
"repos_url": "https://api.github.com/users/chenin-wang/repos",
"events_url": "https://api.github.com/users/chenin-wang/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenin-wang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-20T05:35:09 | 2025-04-22T09:18:17 | 2025-04-22T09:18:17 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37633",
"html_url": "https://github.com/huggingface/transformers/pull/37633",
"diff_url": "https://github.com/huggingface/transformers/pull/37633.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37633.patch",
"merged_at": "2025-04-22T09:18:17"
} |
# What does this PR do?
Set default value for output_attentions parameter in Gemma2 and Gemma3
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @zucchini-nlp | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37633/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37632 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37632/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37632/comments | https://api.github.com/repos/huggingface/transformers/issues/37632/events | https://github.com/huggingface/transformers/issues/37632 | 3,006,734,100 | I_kwDOCUB6oc6zNx8U | 37,632 | bitnet | {
"login": "werruww",
"id": 157249411,
"node_id": "U_kgDOCV9vgw",
"avatar_url": "https://avatars.githubusercontent.com/u/157249411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/werruww",
"html_url": "https://github.com/werruww",
"followers_url": "https://api.github.com/users/werruww/followers",
"following_url": "https://api.github.com/users/werruww/following{/other_user}",
"gists_url": "https://api.github.com/users/werruww/gists{/gist_id}",
"starred_url": "https://api.github.com/users/werruww/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/werruww/subscriptions",
"organizations_url": "https://api.github.com/users/werruww/orgs",
"repos_url": "https://api.github.com/users/werruww/repos",
"events_url": "https://api.github.com/users/werruww/events{/privacy}",
"received_events_url": "https://api.github.com/users/werruww/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-20T01:00:35 | 2025-05-29T08:03:00 | 2025-05-29T08:03:00 | NONE | null | null | null | null | ### System Info
https://huggingface.co/microsoft/bitnet-b1.58-2B-4T/discussions/16
### Who can help?
?
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
?
### Expected behavior
? | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37632/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37631 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37631/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37631/comments | https://api.github.com/repos/huggingface/transformers/issues/37631/events | https://github.com/huggingface/transformers/pull/37631 | 3,006,502,948 | PR_kwDOCUB6oc6TM999 | 37,631 | Fix Qwen2.5-Omni get_chunked_index chunking functionality | {
"login": "imkero",
"id": 32296555,
"node_id": "MDQ6VXNlcjMyMjk2NTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/32296555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imkero",
"html_url": "https://github.com/imkero",
"followers_url": "https://api.github.com/users/imkero/followers",
"following_url": "https://api.github.com/users/imkero/following{/other_user}",
"gists_url": "https://api.github.com/users/imkero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imkero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imkero/subscriptions",
"organizations_url": "https://api.github.com/users/imkero/orgs",
"repos_url": "https://api.github.com/users/imkero/repos",
"events_url": "https://api.github.com/users/imkero/events{/privacy}",
"received_events_url": "https://api.github.com/users/imkero/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-19T15:36:29 | 2025-04-22T09:15:37 | 2025-04-22T09:15:37 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37631",
"html_url": "https://github.com/huggingface/transformers/pull/37631",
"diff_url": "https://github.com/huggingface/transformers/pull/37631.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37631.patch",
"merged_at": "2025-04-22T09:15:37"
} | # What does this PR do?
This PR fixes the incorrect mrope position chunking function `get_chunked_index` in `modular_qwen2_5_omni`.
> The shape of `token_indices` for `modular_qwen2_5_omni.get_chunked_index` is `(3, seq_len)` in main branch
It uses the constant value `3` (from `len(token_indices)`) as the loop bound when iterating over the `token_indices` input, instead of the correct sequence length `token_indices.shape[1]`. As a result, it always produces only a single chunk. (Line 1160 vs Line 1166)
https://github.com/huggingface/transformers/blob/27a25bee4fcb865e8799ba026f1ea4455f2cca98/src/transformers/models/qwen2_5_omni/modular_qwen2_5_omni.py#L1157-L1168
A similar but correct implementation exists in `processing_qwen2_5_omni`, which we can take as a reference. (Line 303 vs Line 309)
https://github.com/huggingface/transformers/blob/27a25bee4fcb865e8799ba026f1ea4455f2cca98/src/transformers/models/qwen2_5_omni/processing_qwen2_5_omni.py#L300-L311
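For illustration, here is a minimal pure-Python sketch of the intended chunking behavior (a hypothetical simplification — the real function operates on a `(3, seq_len)` tensor and takes an additional start-offset argument):

```python
def get_chunked_index(token_indices, tokens_per_chunk):
    # token_indices: 3 rows (one per mrope dimension) of per-token positions.
    # Iterate over the sequence dimension, i.e. token_indices.shape[1];
    # using len(token_indices) here would be the constant 3 and always
    # yield a single chunk, which is the bug this PR fixes.
    seq_len = len(token_indices[0])
    chunks, start_idx, current_chunk = [], 0, 1
    for i in range(seq_len):
        if token_indices[0][i] >= current_chunk * tokens_per_chunk:
            chunks.append((start_idx, i))
            start_idx = i
            current_chunk += 1
    chunks.append((start_idx, seq_len))
    return chunks

# positions 0..9 split into chunks of 4 tokens
print(get_chunked_index([list(range(10))] * 3, 4))  # [(0, 4), (4, 8), (8, 10)]
```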
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@BakerBunker can you take a look about this bugfix please?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37631/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/37631/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37630 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37630/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37630/comments | https://api.github.com/repos/huggingface/transformers/issues/37630/events | https://github.com/huggingface/transformers/pull/37630 | 3,006,451,340 | PR_kwDOCUB6oc6TMzjK | 37,630 | Fix Gemma3ForCausalLM base_model_prefix | {
"login": "calpt",
"id": 36051308,
"node_id": "MDQ6VXNlcjM2MDUxMzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/36051308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calpt",
"html_url": "https://github.com/calpt",
"followers_url": "https://api.github.com/users/calpt/followers",
"following_url": "https://api.github.com/users/calpt/following{/other_user}",
"gists_url": "https://api.github.com/users/calpt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/calpt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calpt/subscriptions",
"organizations_url": "https://api.github.com/users/calpt/orgs",
"repos_url": "https://api.github.com/users/calpt/repos",
"events_url": "https://api.github.com/users/calpt/events{/privacy}",
"received_events_url": "https://api.github.com/users/calpt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-19T14:02:39 | 2025-04-21T19:03:54 | 2025-04-21T19:03:54 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37630",
"html_url": "https://github.com/huggingface/transformers/pull/37630",
"diff_url": "https://github.com/huggingface/transformers/pull/37630.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37630.patch",
"merged_at": null
} | # What does this PR do?
The base model attribute of `Gemma3ForCausalLM` is `model`; this PR updates `base_model_prefix` to match, so that `model.base_model` correctly returns the `Gemma3TextModel` instance.
| {
"login": "calpt",
"id": 36051308,
"node_id": "MDQ6VXNlcjM2MDUxMzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/36051308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calpt",
"html_url": "https://github.com/calpt",
"followers_url": "https://api.github.com/users/calpt/followers",
"following_url": "https://api.github.com/users/calpt/following{/other_user}",
"gists_url": "https://api.github.com/users/calpt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/calpt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/calpt/subscriptions",
"organizations_url": "https://api.github.com/users/calpt/orgs",
"repos_url": "https://api.github.com/users/calpt/repos",
"events_url": "https://api.github.com/users/calpt/events{/privacy}",
"received_events_url": "https://api.github.com/users/calpt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37630/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37629 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37629/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37629/comments | https://api.github.com/repos/huggingface/transformers/issues/37629/events | https://github.com/huggingface/transformers/pull/37629 | 3,006,434,672 | PR_kwDOCUB6oc6TMwQW | 37,629 | [tests] Stricter generate + compilation test -- no recompilations allowed | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-19T13:24:40 | 2025-04-30T14:48:35 | 2025-04-22T10:12:18 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37629",
"html_url": "https://github.com/huggingface/transformers/pull/37629",
"diff_url": "https://github.com/huggingface/transformers/pull/37629.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37629.patch",
"merged_at": "2025-04-22T10:12:18"
} | # What does this PR do?
Follow-up to #37447
This PR upgrades `test_generate_compile_model_forward` to catch recompilation issues. This is done by a) activating recompilation logs and b) catching recompilation messages in those logs. The improved test would have failed with the changes that broke Gemma 2/3 + compile 🤗
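As a rough illustration of why recompilations are detectable at all (a hedged sketch — the test in this PR inspects PyTorch's recompilation log messages, not a custom backend), one can count how often the compiler is invoked by plugging a counting backend into `torch.compile`:

```python
import torch

compile_count = {"n": 0}

def counting_backend(gm, example_inputs):
    # the backend runs once per (re)compilation of the traced graph
    compile_count["n"] += 1
    return gm.forward

@torch.compile(backend=counting_backend, dynamic=False)
def double(x):
    return x * 2

double(torch.randn(4))  # first call: compiles
double(torch.randn(4))  # same shape: served from cache, no recompilation
double(torch.randn(8))  # new shape with dynamic=False: recompiles
```

A stricter test along these lines would assert that the count stays at 1 when all calls use identical input shapes.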
In the process, a few extra things were standardized in the test, and a few skips were removed. | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37629/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37629/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37628 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37628/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37628/comments | https://api.github.com/repos/huggingface/transformers/issues/37628/events | https://github.com/huggingface/transformers/pull/37628 | 3,006,353,090 | PR_kwDOCUB6oc6TMgNP | 37,628 | docs(swin): Update Swin model card to standard format | {
"login": "BryanBradfo",
"id": 101939095,
"node_id": "U_kgDOBhN3lw",
"avatar_url": "https://avatars.githubusercontent.com/u/101939095?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BryanBradfo",
"html_url": "https://github.com/BryanBradfo",
"followers_url": "https://api.github.com/users/BryanBradfo/followers",
"following_url": "https://api.github.com/users/BryanBradfo/following{/other_user}",
"gists_url": "https://api.github.com/users/BryanBradfo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BryanBradfo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BryanBradfo/subscriptions",
"organizations_url": "https://api.github.com/users/BryanBradfo/orgs",
"repos_url": "https://api.github.com/users/BryanBradfo/repos",
"events_url": "https://api.github.com/users/BryanBradfo/events{/privacy}",
"received_events_url": "https://api.github.com/users/BryanBradfo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-19T10:27:17 | 2025-05-21T23:16:44 | 2025-05-21T23:16:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37628",
"html_url": "https://github.com/huggingface/transformers/pull/37628",
"diff_url": "https://github.com/huggingface/transformers/pull/37628.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37628.patch",
"merged_at": "2025-05-21T23:16:43"
} | # What does this PR do?
This PR updates the Swin Transformer model card (`swin.md`) to align with the new standardized format.
- Follows the template provided in #36979.
- Includes working Pipeline and AutoModel examples tested locally.
- Removed the `transformers-cli` example due to input handling inconsistencies observed across different CLI versions/setups.
Relates to #36979
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37628/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37628/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37627 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37627/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37627/comments | https://api.github.com/repos/huggingface/transformers/issues/37627/events | https://github.com/huggingface/transformers/issues/37627 | 3,006,334,276 | I_kwDOCUB6oc6zMQVE | 37,627 | if I want to use my image-text data to finetune the SigLIP2, where I can get the train code? | {
"login": "LeopardCatCat",
"id": 109834795,
"node_id": "U_kgDOBovyKw",
"avatar_url": "https://avatars.githubusercontent.com/u/109834795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeopardCatCat",
"html_url": "https://github.com/LeopardCatCat",
"followers_url": "https://api.github.com/users/LeopardCatCat/followers",
"following_url": "https://api.github.com/users/LeopardCatCat/following{/other_user}",
"gists_url": "https://api.github.com/users/LeopardCatCat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeopardCatCat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeopardCatCat/subscriptions",
"organizations_url": "https://api.github.com/users/LeopardCatCat/orgs",
"repos_url": "https://api.github.com/users/LeopardCatCat/repos",
"events_url": "https://api.github.com/users/LeopardCatCat/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeopardCatCat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-04-19T09:50:18 | 2025-04-21T10:42:24 | null | NONE | null | null | null | null | ### Feature request
siglip2 finetune
### Motivation
siglip2 finetune
### Your contribution
not yet | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37627/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/37626 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37626/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37626/comments | https://api.github.com/repos/huggingface/transformers/issues/37626/events | https://github.com/huggingface/transformers/issues/37626 | 3,006,274,010 | I_kwDOCUB6oc6zMBna | 37,626 | `check_imports` unnecessarily verifies packages that may not be needed | {
"login": "HichTala",
"id": 98521878,
"node_id": "U_kgDOBd9TFg",
"avatar_url": "https://avatars.githubusercontent.com/u/98521878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HichTala",
"html_url": "https://github.com/HichTala",
"followers_url": "https://api.github.com/users/HichTala/followers",
"following_url": "https://api.github.com/users/HichTala/following{/other_user}",
"gists_url": "https://api.github.com/users/HichTala/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HichTala/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HichTala/subscriptions",
"organizations_url": "https://api.github.com/users/HichTala/orgs",
"repos_url": "https://api.github.com/users/HichTala/repos",
"events_url": "https://api.github.com/users/HichTala/events{/privacy}",
"received_events_url": "https://api.github.com/users/HichTala/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-19T07:43:08 | 2025-04-30T08:17:55 | 2025-04-30T08:17:55 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.51.3
- Platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.39
- Python version: 3.12.4
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: 1.6.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA RTX 3500 Ada Generation Laptop GPU
### Who can help?
@Rocketknight1
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In an environment without TensorFlow, run the following:
```python
from transformers import AutoImageProcessor
image_processor = AutoImageProcessor.from_pretrained(
"HichTala/reproduction_example",
trust_remote_code=True
)
```
Here is the link to the [repo](https://huggingface.co/HichTala/reproduction_example).
### Expected behavior
Here the code crashes with the error message:
```
ImportError: This modeling file requires the following packages that were not found in your environment: tensorflow. Run `pip install tensorflow`
```
However, TensorFlow is not actually necessary, since the code already guards the import with an availability check:
```python
if is_tf_available():
import tensorflow as tf
```
In the new AST-based parsing of remote code imports, every import is checked except those from `flash_attn`.
Maybe we want to ignore more libraries as well. I propose ignoring a library whenever a corresponding availability-check function exists in `transformers.utils.import_utils`.
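As a rough illustration, the proposal could look like the sketch below. This is a minimal, hypothetical version, not the actual `check_imports` implementation: the `OPTIONAL_CHECKS` mapping and the `get_required_imports` helper are made-up names, and a real version would discover the check functions by inspecting `transformers.utils.import_utils` rather than hardcoding them.

```python
import ast

# Hypothetical mapping for illustration only: packages for which transformers
# exposes an availability-check function. A real implementation would build
# this by scanning transformers.utils.import_utils for is_*_available helpers.
OPTIONAL_CHECKS = {"tensorflow": "is_tf_available", "flax": "is_flax_available"}

def get_required_imports(source: str) -> list[str]:
    """Collect top-level imported package names from remote code, skipping
    packages that have an availability-check function (and are therefore
    optional)."""
    tree = ast.parse(source)
    imports = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imports.add(alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            imports.add(node.module.split(".")[0])
    return sorted(pkg for pkg in imports if pkg not in OPTIONAL_CHECKS)

code = """
import torch
if is_tf_available():
    import tensorflow as tf
"""
print(get_required_imports(code))  # → ['torch']
```

With this filtering, the reproduction above would no longer raise an `ImportError` for `tensorflow`, since the guarded import is treated as optional.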
If you're OK with that, I'll be happy to submit a PR. | {
"login": "HichTala",
"id": 98521878,
"node_id": "U_kgDOBd9TFg",
"avatar_url": "https://avatars.githubusercontent.com/u/98521878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HichTala",
"html_url": "https://github.com/HichTala",
"followers_url": "https://api.github.com/users/HichTala/followers",
"following_url": "https://api.github.com/users/HichTala/following{/other_user}",
"gists_url": "https://api.github.com/users/HichTala/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HichTala/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HichTala/subscriptions",
"organizations_url": "https://api.github.com/users/HichTala/orgs",
"repos_url": "https://api.github.com/users/HichTala/repos",
"events_url": "https://api.github.com/users/HichTala/events{/privacy}",
"received_events_url": "https://api.github.com/users/HichTala/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37626/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37625 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37625/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37625/comments | https://api.github.com/repos/huggingface/transformers/issues/37625/events | https://github.com/huggingface/transformers/pull/37625 | 3,006,262,742 | PR_kwDOCUB6oc6TMN9J | 37,625 | Allow Exclusion of Input IDs from RepetitionPenaltyLogitsProcessor | {
"login": "alex-jw-brooks",
"id": 10740300,
"node_id": "MDQ6VXNlcjEwNzQwMzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/10740300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-jw-brooks",
"html_url": "https://github.com/alex-jw-brooks",
"followers_url": "https://api.github.com/users/alex-jw-brooks/followers",
"following_url": "https://api.github.com/users/alex-jw-brooks/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-jw-brooks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-jw-brooks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-jw-brooks/subscriptions",
"organizations_url": "https://api.github.com/users/alex-jw-brooks/orgs",
"repos_url": "https://api.github.com/users/alex-jw-brooks/repos",
"events_url": "https://api.github.com/users/alex-jw-brooks/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-jw-brooks/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-19T07:22:04 | 2025-04-21T14:46:05 | 2025-04-21T14:46:05 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37625",
"html_url": "https://github.com/huggingface/transformers/pull/37625",
"diff_url": "https://github.com/huggingface/transformers/pull/37625.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37625.patch",
"merged_at": "2025-04-21T14:46:05"
} | This PR adds a flag allowing the exclusion of input IDs when using the `RepetitionPenaltyLogitsProcessor`. Currently there are some workarounds usable with language models, e.g. passing input embeddings instead of input IDs; however, these workarounds are trickier with multimodal models, since we generally pass the input IDs first to create the merged multimodal embeddings in the model.
Adding such a flag would be really helpful for models like granite speech, which @avihu111 and I had recently added support for. In experiments, the model works very well with a super high repetition penalty on just the newly generated token IDs, but performance degrades severely when the input IDs are included.
By default, input IDs are included to avoid changing existing default behaviors.
Fixes https://github.com/huggingface/transformers/issues/36642
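As a rough illustration of the behavior this flag enables, here is a pure-Python sketch of a CTRL-style repetition penalty that can skip the prompt tokens. This is NOT the transformers implementation, and the `prompt_length` parameter is an assumed name for illustration, not necessarily the flag added by this PR.

```python
def penalize_repetition(logits, token_ids, penalty, prompt_length=0):
    """Apply a CTRL-style repetition penalty (divide positive scores,
    multiply negative ones) to tokens already seen, optionally ignoring
    the first prompt_length input IDs."""
    out = list(logits)
    # When prompt_length > 0, only tokens generated after the prompt are penalized.
    for tok in set(token_ids[prompt_length:]):
        out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

logits = [2.0, -1.0, 0.5]
# Tokens 0 and 1 come from the prompt; token 2 was newly generated.
print(penalize_repetition(logits, [0, 1, 2], penalty=2.0, prompt_length=2))
# → [2.0, -1.0, 0.25]
```

With `prompt_length=0` (the default, matching the existing behavior), all seen tokens are penalized, including the input IDs.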
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
Probably @gante @zucchini-nlp @eustlb would have the most context! | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37625/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37625/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37624 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37624/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37624/comments | https://api.github.com/repos/huggingface/transformers/issues/37624/events | https://github.com/huggingface/transformers/pull/37624 | 3,006,171,047 | PR_kwDOCUB6oc6TL7zx | 37,624 | chore: update SigLIP2 model card | {
"login": "saswatmeher",
"id": 35535056,
"node_id": "MDQ6VXNlcjM1NTM1MDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/35535056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saswatmeher",
"html_url": "https://github.com/saswatmeher",
"followers_url": "https://api.github.com/users/saswatmeher/followers",
"following_url": "https://api.github.com/users/saswatmeher/following{/other_user}",
"gists_url": "https://api.github.com/users/saswatmeher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saswatmeher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saswatmeher/subscriptions",
"organizations_url": "https://api.github.com/users/saswatmeher/orgs",
"repos_url": "https://api.github.com/users/saswatmeher/repos",
"events_url": "https://api.github.com/users/saswatmeher/events{/privacy}",
"received_events_url": "https://api.github.com/users/saswatmeher/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-19T04:35:03 | 2025-04-25T19:46:18 | 2025-04-25T19:46:18 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37624",
"html_url": "https://github.com/huggingface/transformers/pull/37624",
"diff_url": "https://github.com/huggingface/transformers/pull/37624.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37624.patch",
"merged_at": "2025-04-25T19:46:18"
} | # What does this PR do?
Update the model card for SigLIP2 to handle #36979
Fixes #36979
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @stevhliu
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37624/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37624/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37623 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37623/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37623/comments | https://api.github.com/repos/huggingface/transformers/issues/37623/events | https://github.com/huggingface/transformers/pull/37623 | 3,005,820,967 | PR_kwDOCUB6oc6TKwnJ | 37,623 | Make hybrid cache exportable | {
"login": "tugsbayasgalan",
"id": 16603271,
"node_id": "MDQ6VXNlcjE2NjAzMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/16603271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tugsbayasgalan",
"html_url": "https://github.com/tugsbayasgalan",
"followers_url": "https://api.github.com/users/tugsbayasgalan/followers",
"following_url": "https://api.github.com/users/tugsbayasgalan/following{/other_user}",
"gists_url": "https://api.github.com/users/tugsbayasgalan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tugsbayasgalan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tugsbayasgalan/subscriptions",
"organizations_url": "https://api.github.com/users/tugsbayasgalan/orgs",
"repos_url": "https://api.github.com/users/tugsbayasgalan/repos",
"events_url": "https://api.github.com/users/tugsbayasgalan/events{/privacy}",
"received_events_url": "https://api.github.com/users/tugsbayasgalan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-18T21:52:20 | 2025-04-29T10:05:03 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37623",
"html_url": "https://github.com/huggingface/transformers/pull/37623",
"diff_url": "https://github.com/huggingface/transformers/pull/37623.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37623.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37623/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37623/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37622 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37622/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37622/comments | https://api.github.com/repos/huggingface/transformers/issues/37622/events | https://github.com/huggingface/transformers/pull/37622 | 3,005,642,009 | PR_kwDOCUB6oc6TKKB8 | 37,622 | Update longformer.md | {
"login": "JihadHammoud02",
"id": 94748033,
"node_id": "U_kgDOBaW9gQ",
"avatar_url": "https://avatars.githubusercontent.com/u/94748033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JihadHammoud02",
"html_url": "https://github.com/JihadHammoud02",
"followers_url": "https://api.github.com/users/JihadHammoud02/followers",
"following_url": "https://api.github.com/users/JihadHammoud02/following{/other_user}",
"gists_url": "https://api.github.com/users/JihadHammoud02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JihadHammoud02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JihadHammoud02/subscriptions",
"organizations_url": "https://api.github.com/users/JihadHammoud02/orgs",
"repos_url": "https://api.github.com/users/JihadHammoud02/repos",
"events_url": "https://api.github.com/users/JihadHammoud02/events{/privacy}",
"received_events_url": "https://api.github.com/users/JihadHammoud02/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-18T19:47:12 | 2025-04-21T17:30:51 | 2025-04-21T17:30:51 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37622",
"html_url": "https://github.com/huggingface/transformers/pull/37622",
"diff_url": "https://github.com/huggingface/transformers/pull/37622.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37622.patch",
"merged_at": "2025-04-21T17:30:51"
} | Refactored Longformer docs
Added examples for pipeline, AutoModel, and CLI usage
Added quantization
Did not add an attention visualizer; from what I researched, Longformer doesn't support it. If that's not the case, I'm happy to add it!
Added a note concerning versions < 4.37.0.dev | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37622/timeline | null | null | null | null | true | true |