url string | repository_url string | labels_url string | comments_url string | events_url string | html_url string | id int64 | node_id string | number int64 | title string | user dict | labels list | state string | locked bool | assignee dict | assignees list | milestone null | comments list | created_at timestamp[ms] | updated_at timestamp[ms] | closed_at timestamp[ms] | author_association string | type dict | active_lock_reason null | draft bool | pull_request dict | body string | closed_by dict | reactions dict | timeline_url string | performed_via_github_app null | state_reason string | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool | is_closed bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/40038 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40038/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40038/comments | https://api.github.com/repos/huggingface/transformers/issues/40038/events | https://github.com/huggingface/transformers/pull/40038 | 3,304,431,456 | PR_kwDOCUB6oc6iyXF0 | 40,038 | Fix error on importing unavailable torch.distributed | {
"login": "m-gallus",
"id": 141938080,
"node_id": "U_kgDOCHXNoA",
"avatar_url": "https://avatars.githubusercontent.com/u/141938080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/m-gallus",
"html_url": "https://github.com/m-gallus",
"followers_url": "https://api.github.com/users/m-gallus/followers",
"following_url": "https://api.github.com/users/m-gallus/following{/other_user}",
"gists_url": "https://api.github.com/users/m-gallus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/m-gallus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/m-gallus/subscriptions",
"organizations_url": "https://api.github.com/users/m-gallus/orgs",
"repos_url": "https://api.github.com/users/m-gallus/repos",
"events_url": "https://api.github.com/users/m-gallus/events{/privacy}",
"received_events_url": "https://api.github.com/users/m-gallus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-08T15:30:07 | 2025-08-29T08:24:38 | 2025-08-12T14:30:51 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40038",
"html_url": "https://github.com/huggingface/transformers/pull/40038",
"diff_url": "https://github.com/huggingface/transformers/pull/40038.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40038.patch",
"merged_at": "2025-08-12T14:30:51"
} | # What does this PR do?
Currently, PyTorch builds on Windows don't support the distributed module, so when users attempt to use transformers (or a library that depends on it), the import fails with the following error:
```
File "C:\Users\Micha\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\distributed\tensor\__init__.py", line 4, in <module>
import torch.distributed.tensor._ops # force import all built-in dtensor ops
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Micha\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\distributed\tensor\_ops\__init__.py", line 2, in <module>
from ._conv_ops import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Micha\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\distributed\tensor\_ops\_conv_ops.py", line 5, in <module>
from torch.distributed.tensor._dtensor_spec import DTensorSpec, TensorMeta
File "C:\Users\Micha\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\distributed\tensor\_dtensor_spec.py", line 6, in <module>
from torch.distributed.tensor.placement_types import (
File "C:\Users\Micha\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\distributed\tensor\placement_types.py", line 8, in <module>
import torch.distributed._functional_collectives as funcol
File "C:\Users\Micha\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\distributed\_functional_collectives.py", line 9, in <module>
import torch.distributed.distributed_c10d as c10d
File "C:\Users\Micha\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\distributed\distributed_c10d.py", line 23, in <module>
from torch._C._distributed_c10d import (
ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
```
This is caused by `model_debugging_utils.py` having an unguarded `import torch.distributed.tensor`. This PR checks that the distributed module is available before importing its tensor module.
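The general pattern here — try the import, fall back gracefully when the module is missing — can be sketched as follows. This is an illustrative helper, not the PR's actual code (the real fix guards the specific `torch.distributed` import):

```python
import importlib


def optional_import(name):
    """Import a module if it is available, returning None otherwise."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return None


# A module that exists imports normally...
json_mod = optional_import("json")
# ...while a missing one no longer crashes at import time.
missing = optional_import("definitely_not_a_real_module_xyz")
print(json_mod is not None, missing is None)
```

With a guard like this, code paths that need the optional module can check for `None` instead of letting a `ModuleNotFoundError` propagate out of a top-level import.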
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40038/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40038/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40037 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40037/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40037/comments | https://api.github.com/repos/huggingface/transformers/issues/40037/events | https://github.com/huggingface/transformers/pull/40037 | 3,304,338,186 | PR_kwDOCUB6oc6iyC9f | 40,037 | fix `notification_service.py` about `time_spent` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-08T14:54:28 | 2025-08-08T15:11:17 | 2025-08-08T15:11:16 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40037",
"html_url": "https://github.com/huggingface/transformers/pull/40037",
"diff_url": "https://github.com/huggingface/transformers/pull/40037.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40037.patch",
"merged_at": "2025-08-08T15:11:16"
} | # What does this PR do?
We extract this information from the test report file, e.g. from a line like `1 passed in 7.89s`, but at some point, we do
> ... ["time_spent"] += time_spent[1:-1]
which removes the first digit. I am not guilty! | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40037/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40036 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40036/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40036/comments | https://api.github.com/repos/huggingface/transformers/issues/40036/events | https://github.com/huggingface/transformers/pull/40036 | 3,304,334,242 | PR_kwDOCUB6oc6iyCFH | 40,036 | fix `notification_service.py` about `time_spent` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-08T14:53:15 | 2025-08-08T15:06:10 | 2025-08-08T14:53:49 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40036",
"html_url": "https://github.com/huggingface/transformers/pull/40036",
"diff_url": "https://github.com/huggingface/transformers/pull/40036.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40036.patch",
"merged_at": null
} | null | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40036/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40035 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40035/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40035/comments | https://api.github.com/repos/huggingface/transformers/issues/40035/events | https://github.com/huggingface/transformers/pull/40035 | 3,304,307,403 | PR_kwDOCUB6oc6ix8WV | 40,035 | Remove deprecated cache-related objects | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-08T14:43:52 | 2025-08-11T08:30:16 | 2025-08-11T08:30:14 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40035",
"html_url": "https://github.com/huggingface/transformers/pull/40035",
"diff_url": "https://github.com/huggingface/transformers/pull/40035.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40035.patch",
"merged_at": "2025-08-11T08:30:14"
} | # What does this PR do?
As per the title! Both the CacheConfigs and the KeyValuesWrapper were scheduled for removal in the next release. | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40035/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40034 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40034/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40034/comments | https://api.github.com/repos/huggingface/transformers/issues/40034/events | https://github.com/huggingface/transformers/issues/40034 | 3,304,232,120 | I_kwDOCUB6oc7E8pS4 | 40,034 | `plamo-2-1b` broken on latest main | {
"login": "tdoublep",
"id": 7945038,
"node_id": "MDQ6VXNlcjc5NDUwMzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7945038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tdoublep",
"html_url": "https://github.com/tdoublep",
"followers_url": "https://api.github.com/users/tdoublep/followers",
"following_url": "https://api.github.com/users/tdoublep/following{/other_user}",
"gists_url": "https://api.github.com/users/tdoublep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tdoublep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tdoublep/subscriptions",
"organizations_url": "https://api.github.com/users/tdoublep/orgs",
"repos_url": "https://api.github.com/users/tdoublep/repos",
"events_url": "https://api.github.com/users/tdoublep/events{/privacy}",
"received_events_url": "https://api.github.com/users/tdoublep/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-08T14:15:53 | 2025-09-02T14:24:07 | 2025-09-02T14:24:07 | NONE | null | null | null | null | ### System Info
```
- `transformers` version: 4.56.0.dev0
- Platform: Linux-5.15.0-143-generic-x86_64-with-glibc2.35
- Python version: 3.11.10
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA H100 80GB HBM3
```
### Who can help?
@Cyrilvallez @manueldeprada
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("pfnet/plamo-2-1b", trust_remote_code=True, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("pfnet/plamo-2-1b", trust_remote_code=True)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model.generate(**encoded_input)
```
produces:
```
Traceback (most recent call last):
File "/home/user/vllm/tests/test_hf.py", line 8, in <module>
output = model.generate(**encoded_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniforge3/envs/dev-env/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/user/transformers/src/transformers/generation/utils.py", line 2522, in generate
result = self._sample(
^^^^^^^^^^^^^
File "/home/user/transformers/src/transformers/generation/utils.py", line 3503, in _sample
outputs = self(**model_inputs, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniforge3/envs/dev-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniforge3/envs/dev-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/net/storage149/mnt/md0/user/modules/transformers_modules/pfnet/plamo-2-1b/a99ff56aee4f73b4e36e376c83130050d05dc178/modeling_plamo.py", line 1576, in forward
outputs = self.model(
^^^^^^^^^^^
File "/home/user/miniforge3/envs/dev-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniforge3/envs/dev-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/net/storage149/mnt/md0/user/modules/transformers_modules/pfnet/plamo-2-1b/a99ff56aee4f73b4e36e376c83130050d05dc178/modeling_plamo.py", line 1454, in forward
out = self.layers(
^^^^^^^^^^^^
File "/home/user/miniforge3/envs/dev-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniforge3/envs/dev-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/net/storage149/mnt/md0/user/modules/transformers_modules/pfnet/plamo-2-1b/a99ff56aee4f73b4e36e376c83130050d05dc178/modeling_plamo.py", line 1281, in forward
layer_outputs = decoder_layer(
^^^^^^^^^^^^^^
File "/home/user/miniforge3/envs/dev-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniforge3/envs/dev-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/net/storage149/mnt/md0/user/modules/transformers_modules/pfnet/plamo-2-1b/a99ff56aee4f73b4e36e376c83130050d05dc178/modeling_plamo.py", line 1206, in forward
hidden_states_sa, present_key_value = self.mixer(
^^^^^^^^^^^
File "/home/user/miniforge3/envs/dev-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniforge3/envs/dev-env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/net/storage149/mnt/md0/user/modules/transformers_modules/pfnet/plamo-2-1b/a99ff56aee4f73b4e36e376c83130050d05dc178/modeling_plamo.py", line 906, in forward
elif past_states[self.layer_idx] is None:
~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/home/user/transformers/src/transformers/cache_utils.py", line 939, in __getitem__
raise KeyError(
KeyError: 'Cache only has 0 layers, attempted to access layer with index 0'
```
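The crash boils down to indexing layer 0 of a cache that reports zero layers. A hypothetical defensive check (illustrative only — not the plamo or transformers code) would treat an out-of-range layer index as "no state yet" instead of raising:

```python
def get_layer_state(past_states, layer_idx):
    """Return the cached state for a layer, or None if it hasn't been allocated."""
    # Hypothetical guard: an index past the allocated layers means "no state yet"
    if past_states is None or layer_idx >= len(past_states):
        return None
    return past_states[layer_idx]


print(get_layer_state([], 0))            # empty cache no longer raises
print(get_layer_state([["k", "v"]], 0))  # existing layer is returned as-is
```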
### Expected behavior
Doesn't crash :) | {
"login": "tdoublep",
"id": 7945038,
"node_id": "MDQ6VXNlcjc5NDUwMzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7945038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tdoublep",
"html_url": "https://github.com/tdoublep",
"followers_url": "https://api.github.com/users/tdoublep/followers",
"following_url": "https://api.github.com/users/tdoublep/following{/other_user}",
"gists_url": "https://api.github.com/users/tdoublep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tdoublep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tdoublep/subscriptions",
"organizations_url": "https://api.github.com/users/tdoublep/orgs",
"repos_url": "https://api.github.com/users/tdoublep/repos",
"events_url": "https://api.github.com/users/tdoublep/events{/privacy}",
"received_events_url": "https://api.github.com/users/tdoublep/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40034/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40033 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40033/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40033/comments | https://api.github.com/repos/huggingface/transformers/issues/40033/events | https://github.com/huggingface/transformers/pull/40033 | 3,304,133,087 | PR_kwDOCUB6oc6ixWWn | 40,033 | Add model card for MobileViT | {
"login": "Shivamjan",
"id": 177928568,
"node_id": "U_kgDOCpr5eA",
"avatar_url": "https://avatars.githubusercontent.com/u/177928568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shivamjan",
"html_url": "https://github.com/Shivamjan",
"followers_url": "https://api.github.com/users/Shivamjan/followers",
"following_url": "https://api.github.com/users/Shivamjan/following{/other_user}",
"gists_url": "https://api.github.com/users/Shivamjan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shivamjan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shivamjan/subscriptions",
"organizations_url": "https://api.github.com/users/Shivamjan/orgs",
"repos_url": "https://api.github.com/users/Shivamjan/repos",
"events_url": "https://api.github.com/users/Shivamjan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shivamjan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-08T13:49:15 | 2025-08-13T07:54:05 | 2025-08-12T18:37:00 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40033",
"html_url": "https://github.com/huggingface/transformers/pull/40033",
"diff_url": "https://github.com/huggingface/transformers/pull/40033.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40033.patch",
"merged_at": "2025-08-12T18:37:00"
} | # What does this PR do?
This PR adds a detailed and beginner-friendly model card for MobileViT to the Hugging Face Transformers documentation. The previous model card was minimal and lacked a clear explanation of the model architecture. This model card retains several elements from the earlier version, as they remain applicable and useful for users.
The new version includes:
- A clear explanation of the MobileViT architecture.
- Notes on preprocessing and image format.
- Clarifies how to use the model for classification and segmentation.
- Highlights TensorFlow Lite compatibility for mobile use.
- Primary references to the original paper and related resources.
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40033/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40032 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40032/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40032/comments | https://api.github.com/repos/huggingface/transformers/issues/40032/events | https://github.com/huggingface/transformers/issues/40032 | 3,304,119,793 | I_kwDOCUB6oc7E8N3x | 40,032 | Add Padding Strategy to DataCollatorForLanguageModeling | {
"login": "rjgleaton",
"id": 70818603,
"node_id": "MDQ6VXNlcjcwODE4NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/70818603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rjgleaton",
"html_url": "https://github.com/rjgleaton",
"followers_url": "https://api.github.com/users/rjgleaton/followers",
"following_url": "https://api.github.com/users/rjgleaton/following{/other_user}",
"gists_url": "https://api.github.com/users/rjgleaton/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rjgleaton/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rjgleaton/subscriptions",
"organizations_url": "https://api.github.com/users/rjgleaton/orgs",
"repos_url": "https://api.github.com/users/rjgleaton/repos",
"events_url": "https://api.github.com/users/rjgleaton/events{/privacy}",
"received_events_url": "https://api.github.com/users/rjgleaton/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-08-08T13:46:00 | 2025-08-08T13:46:00 | null | CONTRIBUTOR | null | null | null | null | ### Feature request
Add the ability to specify a padding strategy when using `DataCollatorForLanguageModeling`
### Motivation
This is a minor QOL enhancement that makes the collator more consistent with others in the library. The main use case would probably be padding to max length to make memory usage more stable during training.
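The contrast between the current behavior and the requested strategy can be sketched with a stdlib-only example (the `pad_token_id=0` default and the token ids below are illustrative assumptions, not values taken from `DataCollatorForLanguageModeling` itself):

```python
# Stdlib-only sketch of the two padding strategies discussed above.
# pad_token_id=0 and the token ids are illustrative assumptions,
# not values taken from DataCollatorForLanguageModeling itself.

def pad_batch(sequences, pad_token_id=0, max_length=None):
    """Pad to the longest sequence, or to a fixed max_length if given."""
    target = max_length if max_length is not None else max(len(s) for s in sequences)
    return [s + [pad_token_id] * (target - len(s)) for s in sequences]

batch = [[101, 7592, 102], [101, 102]]

# Dynamic padding (current behavior): pad to the longest item per batch.
dynamic = pad_batch(batch)

# "max_length" padding (the requested strategy): every batch gets the
# same shape, which keeps memory usage stable across training steps.
fixed = pad_batch(batch, max_length=8)

print([len(s) for s in dynamic])  # [3, 3]
print([len(s) for s in fixed])    # [8, 8]
```

With `max_length` padding, every batch tensor has an identical shape, which is what makes memory usage predictable during training.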
### Your contribution
I'll submit a PR in just a bit to add this :) | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40032/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40031 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40031/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40031/comments | https://api.github.com/repos/huggingface/transformers/issues/40031/events | https://github.com/huggingface/transformers/issues/40031 | 3,304,096,734 | I_kwDOCUB6oc7E8IPe | 40,031 | [gpt-oss] MoE routing bug in the mxfp4 implementation (in distributed setting) | {
"login": "kitft",
"id": 58341426,
"node_id": "MDQ6VXNlcjU4MzQxNDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/58341426?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kitft",
"html_url": "https://github.com/kitft",
"followers_url": "https://api.github.com/users/kitft/followers",
"following_url": "https://api.github.com/users/kitft/following{/other_user}",
"gists_url": "https://api.github.com/users/kitft/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kitft/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kitft/subscriptions",
"organizations_url": "https://api.github.com/users/kitft/orgs",
"repos_url": "https://api.github.com/users/kitft/repos",
"events_url": "https://api.github.com/users/kitft/events{/privacy}",
"received_events_url": "https://api.github.com/users/kitft/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-08T13:39:35 | 2025-08-19T14:35:15 | 2025-08-19T14:35:15 | NONE | null | null | null | null | ### System Info
```
- `transformers` version: 4.55.0
- Platform: Linux-6.11.11+-x86_64-with-glibc2.35
- Python version: 3.11.13
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.6.1
- Accelerate version: 1.10.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA B200
```
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running data-parallel on a multi-GPU node, launched with torchrun, I get nonsense generations when trying to run the new gpt-oss model with mxfp4, using the recommended Triton kernels and the latest version of PyTorch. This happens on an 8x node of H100s or B200s; I have not tested other settings.
I have no such issues if I run the model with kernels=true (which dequantises the model to bf16).
`torchrun --standalone --nproc_per_node=2 test_gpt.py`
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
from transformers.distributed import DistributedConfig
rank=int(os.environ.get("LOCAL_RANK"))
import torch
import torch.distributed as dist
# Initialize the process group
dist.init_process_group("nccl", rank=rank, world_size=2)
model_id = "openai/gpt-oss-20b"
torch.cuda.set_device(rank)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map={"": rank}, # or "auto": either of these two choices leads to the issue!
)
messages = [
{"role": "user", "content": "How many rs are in the word 'strawberry'?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
).to(model.device)
generated = model.generate(**inputs, max_new_tokens=100, do_sample=True)
print(f'rank {rank} ------ {tokenizer.decode(generated[0][inputs["input_ids"].shape[-1]:])}')
```
———————————————————————————————————————
The output is:
```
rank 1 ------ <|message|> 1 that there can. that no you there, if for many or, no for many: (no). with, it are not, or.
not. and. etc..
or. or; The answer. not, and, etc. etc. or; ChatGPT,. A of Open AI, 3. 3. The Answer. not, and, do, no, where, to, no, and? This language...
. For many, and
rank 0 ------ <|channel|>analysis<|message|>We need to determine number of 'r's in word 'strawberry'. The word 'strawberry? Actually the word 'strawberry. The letters: s t r a w b e e? Wait no: 's'. Let's write: s t r a w b e? I'm going to check.
Word: 'strawberry': letters 's' 't' 'r' 'a' 'w' 'b' 'e' 'r'
```
I have localised the bug to https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/mxfp4.py, where the routing logic incorrectly selects expert-parallel routing whenever `dist.is_initialized()` is true, even in a purely data-parallel setup. In my setting, forcibly disabling this branch resolves the issue:
```
if dist.is_available() and dist.is_initialized():
routing = routing_torch_dist
else:
routing = triton_kernels_hub.routing.routing
```
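One hedged way to frame the fix direction is a small decision helper: distributed routing should only kick in when experts are actually sharded across ranks, not merely because `torch.distributed` is initialized. The helper below is a hypothetical sketch; the function name and the `expert_parallel_size` parameter are assumptions, not the actual mxfp4.py API.

```python
# Hypothetical decision helper sketching the fix direction described
# above. The names and the expert_parallel_size parameter are
# assumptions, not the actual mxfp4.py API.

def select_routing(dist_initialized, expert_parallel_size):
    if dist_initialized and expert_parallel_size > 1:
        # Experts are sharded across ranks: distributed routing needed.
        return "routing_torch_dist"
    # Each rank holds all experts (pure data parallel): local routing,
    # even though torch.distributed is initialized.
    return "local_routing"

print(select_routing(True, 1))   # data parallel -> local_routing
print(select_routing(True, 8))   # expert parallel -> routing_torch_dist
print(select_routing(False, 1))  # single process -> local_routing
```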
### Expected behavior
The generations should make sense, and be of equal quality across ranks. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40031/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40030 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40030/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40030/comments | https://api.github.com/repos/huggingface/transformers/issues/40030/events | https://github.com/huggingface/transformers/pull/40030 | 3,304,077,401 | PR_kwDOCUB6oc6ixJ32 | 40,030 | Update boxes expectations for OWLViT test | {
"login": "mihaidusmanu",
"id": 7276224,
"node_id": "MDQ6VXNlcjcyNzYyMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7276224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mihaidusmanu",
"html_url": "https://github.com/mihaidusmanu",
"followers_url": "https://api.github.com/users/mihaidusmanu/followers",
"following_url": "https://api.github.com/users/mihaidusmanu/following{/other_user}",
"gists_url": "https://api.github.com/users/mihaidusmanu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mihaidusmanu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mihaidusmanu/subscriptions",
"organizations_url": "https://api.github.com/users/mihaidusmanu/orgs",
"repos_url": "https://api.github.com/users/mihaidusmanu/repos",
"events_url": "https://api.github.com/users/mihaidusmanu/events{/privacy}",
"received_events_url": "https://api.github.com/users/mihaidusmanu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-08T13:33:15 | 2025-08-12T14:44:29 | 2025-08-12T14:03:38 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40030",
"html_url": "https://github.com/huggingface/transformers/pull/40030",
"diff_url": "https://github.com/huggingface/transformers/pull/40030.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40030.patch",
"merged_at": null
} | # What does this PR do?
While working on a related PR #40023, I noticed some OWLViT tests were failing on main on my local machine.
There seem to be some minor differences in the predicted boxes, so I simply updated them to the latest values (the other outputs look correct).
Not sure if this is hardware-related or something that slipped through at some point; I'm running on an RTX 4080 SUPER on WSL (pip torch 2.7.1 with CUDA 12.6, OS CUDA version 12.9).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Maybe @amyeroberts @qubvel @ArthurZucker | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40030/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40029 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40029/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40029/comments | https://api.github.com/repos/huggingface/transformers/issues/40029/events | https://github.com/huggingface/transformers/pull/40029 | 3,303,842,319 | PR_kwDOCUB6oc6iwYAK | 40,029 | Revert FA2 kwargs construction | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-08T12:16:20 | 2025-08-12T08:48:35 | 2025-08-12T08:48:35 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40029",
"html_url": "https://github.com/huggingface/transformers/pull/40029",
"diff_url": "https://github.com/huggingface/transformers/pull/40029.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40029.patch",
"merged_at": "2025-08-12T08:48:35"
} | # What does this PR do?
As per title, discussed internally
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40029/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40028 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40028/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40028/comments | https://api.github.com/repos/huggingface/transformers/issues/40028/events | https://github.com/huggingface/transformers/issues/40028 | 3,303,756,086 | I_kwDOCUB6oc7E61E2 | 40,028 | `TypeError: 'builtins.safe_open' object is not iterable` in `load_pytorch_state_dict_in_tf2_model ` | {
"login": "harupy",
"id": 17039389,
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harupy",
"html_url": "https://github.com/harupy",
"followers_url": "https://api.github.com/users/harupy/followers",
"following_url": "https://api.github.com/users/harupy/following{/other_user}",
"gists_url": "https://api.github.com/users/harupy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harupy/subscriptions",
"organizations_url": "https://api.github.com/users/harupy/orgs",
"repos_url": "https://api.github.com/users/harupy/repos",
"events_url": "https://api.github.com/users/harupy/events{/privacy}",
"received_events_url": "https://api.github.com/users/harupy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-08T11:51:28 | 2025-08-13T13:00:58 | 2025-08-08T12:57:43 | CONTRIBUTOR | null | null | null | null | ```
Traceback (most recent call last):
File "/home/runner/work/dev/dev/tests/transformers/helper.py", line 321, in <module>
prefetch_models()
File "/home/runner/work/dev/dev/tests/transformers/helper.py", line 317, in prefetch_models
func()
File "/home/runner/work/dev/dev/tests/helper_functions.py", line 672, in decorated_func
return test_func(*args, **kwargs)
File "/home/runner/work/dev/dev/tests/transformers/helper.py", line 46, in load_small_qa_tf_pipeline
model = transformers.TFAutoModelForQuestionAnswering.from_pretrained(architecture)
File "/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 600, in from_pretrained
return model_class.from_pretrained(
File "/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/transformers/modeling_tf_utils.py", line 2964, in from_pretrained
return load_pytorch_state_dict_in_tf2_model(
File "/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/transformers/modeling_tf_pytorch_utils.py", line 333, in load_pytorch_state_dict_in_tf2_model
for key in pt_state_dict:
TypeError: 'builtins.safe_open' object is not iterable
```
Code to reproduce:
```python
import transformers
transformers.TFAutoModelForQuestionAnswering.from_pretrained("csarron/mobilebert-uncased-squad-v2")
```
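The failure mode reduces to a stdlib-only sketch: an object that exposes `.keys()` but does not implement `__iter__` cannot be iterated directly, which is exactly what the traceback above shows for the `for key in pt_state_dict` loop. The `SafeOpenLike` class below is a hypothetical stand-in, not the real safetensors implementation.

```python
# Minimal stdlib stand-in for the failure mode: an object that exposes
# .keys() but does not implement __iter__, so `for key in obj` raises
# TypeError, as in the traceback above. SafeOpenLike is a hypothetical
# stand-in, not the real safetensors class.

class SafeOpenLike:
    def __init__(self, tensors):
        self._tensors = tensors

    def keys(self):
        return list(self._tensors)

    def get_tensor(self, name):
        return self._tensors[name]


state = SafeOpenLike({"embeddings.weight": [0.0], "classifier.bias": [1.0]})

# Buggy pattern from load_pytorch_state_dict_in_tf2_model.
try:
    for key in state:
        pass
    direct_iteration_ok = True
except TypeError:
    direct_iteration_ok = False

# Working pattern: iterate the keys explicitly.
names = sorted(key for key in state.keys())

print(direct_iteration_ok)  # False
print(names)
```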
Package versions:
- `transformers`: 4.55.0
- `safetensors`: 0.6.1 | {
"login": "harupy",
"id": 17039389,
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harupy",
"html_url": "https://github.com/harupy",
"followers_url": "https://api.github.com/users/harupy/followers",
"following_url": "https://api.github.com/users/harupy/following{/other_user}",
"gists_url": "https://api.github.com/users/harupy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harupy/subscriptions",
"organizations_url": "https://api.github.com/users/harupy/orgs",
"repos_url": "https://api.github.com/users/harupy/repos",
"events_url": "https://api.github.com/users/harupy/events{/privacy}",
"received_events_url": "https://api.github.com/users/harupy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40028/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40027 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40027/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40027/comments | https://api.github.com/repos/huggingface/transformers/issues/40027/events | https://github.com/huggingface/transformers/pull/40027 | 3,303,727,480 | PR_kwDOCUB6oc6iv_7v | 40,027 | Add amd runners to run-slow command | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-08T11:42:12 | 2025-10-16T22:44:45 | 2025-10-16T22:44:45 | MEMBER | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40027",
"html_url": "https://github.com/huggingface/transformers/pull/40027",
"diff_url": "https://github.com/huggingface/transformers/pull/40027.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40027.patch",
"merged_at": null
} | null | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40027/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40026 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40026/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40026/comments | https://api.github.com/repos/huggingface/transformers/issues/40026/events | https://github.com/huggingface/transformers/pull/40026 | 3,303,429,863 | PR_kwDOCUB6oc6ivFff | 40,026 | Bnb failling tests | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-08T09:58:31 | 2025-08-08T14:28:02 | 2025-08-08T14:28:00 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40026",
"html_url": "https://github.com/huggingface/transformers/pull/40026",
"diff_url": "https://github.com/huggingface/transformers/pull/40026.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40026.patch",
"merged_at": "2025-08-08T14:28:00"
} | # What does this PR do?
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40026/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40025 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40025/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40025/comments | https://api.github.com/repos/huggingface/transformers/issues/40025/events | https://github.com/huggingface/transformers/pull/40025 | 3,303,411,344 | PR_kwDOCUB6oc6ivB2z | 40,025 | [GLM4V] fix vision placeholder mask | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-08-08T09:51:44 | 2025-08-11T06:36:20 | 2025-08-11T06:36:20 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40025",
"html_url": "https://github.com/huggingface/transformers/pull/40025",
"diff_url": "https://github.com/huggingface/transformers/pull/40025.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40025.patch",
"merged_at": null
} | # What does this PR do?
As per title, GLM uses only `image_token_id` to denote both inputs and doesn't do mixed input inference | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40025/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40024 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40024/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40024/comments | https://api.github.com/repos/huggingface/transformers/issues/40024/events | https://github.com/huggingface/transformers/pull/40024 | 3,303,332,372 | PR_kwDOCUB6oc6iuykD | 40,024 | Fix missing None default values for Gemma3n model in get_placeholder_mask (#39991) | {
"login": "Znerual",
"id": 22452386,
"node_id": "MDQ6VXNlcjIyNDUyMzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/22452386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Znerual",
"html_url": "https://github.com/Znerual",
"followers_url": "https://api.github.com/users/Znerual/followers",
"following_url": "https://api.github.com/users/Znerual/following{/other_user}",
"gists_url": "https://api.github.com/users/Znerual/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Znerual/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Znerual/subscriptions",
"organizations_url": "https://api.github.com/users/Znerual/orgs",
"repos_url": "https://api.github.com/users/Znerual/repos",
"events_url": "https://api.github.com/users/Znerual/events{/privacy}",
"received_events_url": "https://api.github.com/users/Znerual/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-08-08T09:28:44 | 2025-08-08T15:09:06 | 2025-08-08T10:43:42 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40024",
"html_url": "https://github.com/huggingface/transformers/pull/40024",
"diff_url": "https://github.com/huggingface/transformers/pull/40024.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40024.patch",
"merged_at": "2025-08-08T10:43:42"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40024/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40023 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40023/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40023/comments | https://api.github.com/repos/huggingface/transformers/issues/40023/events | https://github.com/huggingface/transformers/pull/40023 | 3,303,209,499 | PR_kwDOCUB6oc6iuZm2 | 40,023 | Add support for SDPA for OWLViT and OWLv2 | {
"login": "mihaidusmanu",
"id": 7276224,
"node_id": "MDQ6VXNlcjcyNzYyMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7276224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mihaidusmanu",
"html_url": "https://github.com/mihaidusmanu",
"followers_url": "https://api.github.com/users/mihaidusmanu/followers",
"following_url": "https://api.github.com/users/mihaidusmanu/following{/other_user}",
"gists_url": "https://api.github.com/users/mihaidusmanu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mihaidusmanu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mihaidusmanu/subscriptions",
"organizations_url": "https://api.github.com/users/mihaidusmanu/orgs",
"repos_url": "https://api.github.com/users/mihaidusmanu/repos",
"events_url": "https://api.github.com/users/mihaidusmanu/events{/privacy}",
"received_events_url": "https://api.github.com/users/mihaidusmanu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-08T08:51:31 | 2025-08-08T13:37:43 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40023",
"html_url": "https://github.com/huggingface/transformers/pull/40023",
"diff_url": "https://github.com/huggingface/transformers/pull/40023.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40023.patch",
"merged_at": null
} | # What does this PR do?
Add support for SDPA (scaled_dot_product_attention) for efficient attention to OWLViT and OWLv2 models.
The previous code is used in the eager attention implementation. I roughly followed the SigLIP code for inspiration.
Note that we could do a larger refactor to use the is_causal flag, but I tried to stick as close as possible to the original implementation in this first version.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? > I added an e2e sdpa inference test based on the fp16 one, but let me know if anything else is needed.
## Who can review?
Maybe @amyeroberts @qubvel @ArthurZucker | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40023/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40022 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40022/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40022/comments | https://api.github.com/repos/huggingface/transformers/issues/40022/events | https://github.com/huggingface/transformers/pull/40022 | 3,302,740,419 | PR_kwDOCUB6oc6is6lS | 40,022 | fix: resolve dropout type error in DogeDecoder | {
"login": "wubingheng111",
"id": 123940419,
"node_id": "U_kgDOB2MuQw",
"avatar_url": "https://avatars.githubusercontent.com/u/123940419?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wubingheng111",
"html_url": "https://github.com/wubingheng111",
"followers_url": "https://api.github.com/users/wubingheng111/followers",
"following_url": "https://api.github.com/users/wubingheng111/following{/other_user}",
"gists_url": "https://api.github.com/users/wubingheng111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wubingheng111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wubingheng111/subscriptions",
"organizations_url": "https://api.github.com/users/wubingheng111/orgs",
"repos_url": "https://api.github.com/users/wubingheng111/repos",
"events_url": "https://api.github.com/users/wubingheng111/events{/privacy}",
"received_events_url": "https://api.github.com/users/wubingheng111/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-08T05:58:41 | 2025-08-12T13:12:21 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40022",
"html_url": "https://github.com/huggingface/transformers/pull/40022",
"diff_url": "https://github.com/huggingface/transformers/pull/40022.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40022.patch",
"merged_at": null
} | Fix: #40079
Fixed TypeError where dropout() received tuple instead of Tensor in DogeDecoderLayer when using MoE configuration. The MLP forward method returns a tuple (hidden_states, router_logits) for MoE layers, but the subsequent dropout operation expected only a Tensor.
- Extract hidden_states from tuple before dropout when using MoE
- Ensure consistent tensor handling in both MLP and MoE configurations
Fixes issue where model.generate() failed with:
TypeError: dropout(): argument 'input' (position 1) must be Tensor, not tuple
@ArthurZucker @gante @LoserCheems | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40022/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/40022/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40021 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40021/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40021/comments | https://api.github.com/repos/huggingface/transformers/issues/40021/events | https://github.com/huggingface/transformers/pull/40021 | 3,302,733,188 | PR_kwDOCUB6oc6is5Ln | 40,021 | [fix] batch inference for llava_onevision | {
"login": "cyr0930",
"id": 14088169,
"node_id": "MDQ6VXNlcjE0MDg4MTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/14088169?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyr0930",
"html_url": "https://github.com/cyr0930",
"followers_url": "https://api.github.com/users/cyr0930/followers",
"following_url": "https://api.github.com/users/cyr0930/following{/other_user}",
"gists_url": "https://api.github.com/users/cyr0930/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyr0930/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyr0930/subscriptions",
"organizations_url": "https://api.github.com/users/cyr0930/orgs",
"repos_url": "https://api.github.com/users/cyr0930/repos",
"events_url": "https://api.github.com/users/cyr0930/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyr0930/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-08T05:54:35 | 2025-08-12T10:58:06 | 2025-08-12T09:01:01 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40021",
"html_url": "https://github.com/huggingface/transformers/pull/40021",
"diff_url": "https://github.com/huggingface/transformers/pull/40021.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40021.patch",
"merged_at": "2025-08-12T09:01:00"
} | # What does this PR do?
This PR recovers batch inference feature for llava_onevision and fixes some contents in documentation.
Before this commit, putting single-image examples and multi-image examples in the same batch did not work appropriately, as an iterable was not consumed correctly.
And also make test case cover this issue.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@zucchini-nlp
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40021/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40020 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40020/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40020/comments | https://api.github.com/repos/huggingface/transformers/issues/40020/events | https://github.com/huggingface/transformers/issues/40020 | 3,302,526,470 | I_kwDOCUB6oc7E2I4G | 40,020 | accelerate==1.10.0 and safetensors==0.6.1 are incompatible with transformers==4.53.1 | {
"login": "AniruddhaHumane",
"id": 22525550,
"node_id": "MDQ6VXNlcjIyNTI1NTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/22525550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AniruddhaHumane",
"html_url": "https://github.com/AniruddhaHumane",
"followers_url": "https://api.github.com/users/AniruddhaHumane/followers",
"following_url": "https://api.github.com/users/AniruddhaHumane/following{/other_user}",
"gists_url": "https://api.github.com/users/AniruddhaHumane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AniruddhaHumane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AniruddhaHumane/subscriptions",
"organizations_url": "https://api.github.com/users/AniruddhaHumane/orgs",
"repos_url": "https://api.github.com/users/AniruddhaHumane/repos",
"events_url": "https://api.github.com/users/AniruddhaHumane/events{/privacy}",
"received_events_url": "https://api.github.com/users/AniruddhaHumane/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-08T03:53:57 | 2025-09-15T08:02:53 | 2025-09-15T08:02:53 | NONE | null | null | null | null | ### System Info
```Shell
accelerate==1.10.0
safetensors==0.6.1
transformers==4.53.1
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Install the above packages. It throws the following error:
> Traceback (most recent call last):
> File "<string>", line 1, in <module>
> ImportError: cannot import name 'AutoImageProcessor' from 'transformers' (/usr/lib/python3.10/site-packages/transformers/__init__.py)
### Expected behavior
`AutoImageProcessor` should be imported properly. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40020/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40019 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40019/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40019/comments | https://api.github.com/repos/huggingface/transformers/issues/40019/events | https://github.com/huggingface/transformers/pull/40019 | 3,302,471,087 | PR_kwDOCUB6oc6isFFm | 40,019 | Feat/add gpt oss sequence classification | {
"login": "robin-ede",
"id": 115729295,
"node_id": "U_kgDOBuXjjw",
"avatar_url": "https://avatars.githubusercontent.com/u/115729295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robin-ede",
"html_url": "https://github.com/robin-ede",
"followers_url": "https://api.github.com/users/robin-ede/followers",
"following_url": "https://api.github.com/users/robin-ede/following{/other_user}",
"gists_url": "https://api.github.com/users/robin-ede/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robin-ede/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robin-ede/subscriptions",
"organizations_url": "https://api.github.com/users/robin-ede/orgs",
"repos_url": "https://api.github.com/users/robin-ede/repos",
"events_url": "https://api.github.com/users/robin-ede/events{/privacy}",
"received_events_url": "https://api.github.com/users/robin-ede/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-08T03:11:50 | 2025-08-15T19:10:02 | 2025-08-15T19:10:02 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40019",
"html_url": "https://github.com/huggingface/transformers/pull/40019",
"diff_url": "https://github.com/huggingface/transformers/pull/40019.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40019.patch",
"merged_at": null
} | # What does this PR do?
This PR implements `GptOssForSequenceClassification` for text classification tasks.
**Key Changes:**
- ✅ **New Model Class**: Added `GptOssForSequenceClassification` inheriting from `GenericForSequenceClassification` and `GptOssPreTrainedModel`
- ✅ **Consistent Implementation**: Implemented in both `modeling_gpt_oss.py` (generated) and `modular_gpt_oss.py` (source) files
- ✅ **Auto-Model Integration**: Added to `modeling_auto.py` registry for automatic model discovery
- ✅ **Test Infrastructure**: Added `sequence_classification_class` to test configuration and enabled all sequence classification tests
- ✅ **Pipeline Support**: Configured for `text-classification` and `zero-shot` pipelines
**Implementation Details:**
- Follows HuggingFace's standard pattern used by other MoE models (Qwen2MoE, Mixtral)
- Uses `GenericForSequenceClassification` mixin for automatic classification head and forward pass logic
- Maintains compatibility with existing GptOss architecture and weight initialization
- Supports both single-label and multi-label classification scenarios
**Testing:**
- All sequence classification tests pass (3/3): basic, single-label, and multi-label variants
- Model instantiation and forward pass work correctly with proper logits shape `[batch_size, num_labels]`
- Integration with HuggingFace pipelines confirmed
Fixes #40018
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
**Link:** https://github.com/huggingface/transformers/issues/40018
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
**Yes:** Added `GptOssForSequenceClassification` to `docs/source/en/model_doc/gpt_oss.md` for auto-generated documentation
- [x] Did you write any new necessary tests?
**Yes:** Added `sequence_classification_class = GptOssForSequenceClassification` to test configuration, enabling all sequence classification tests
## Who can review?
@ArthurZucker - This adds a new text model class following standard patterns for sequence classification | {
"login": "robin-ede",
"id": 115729295,
"node_id": "U_kgDOBuXjjw",
"avatar_url": "https://avatars.githubusercontent.com/u/115729295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robin-ede",
"html_url": "https://github.com/robin-ede",
"followers_url": "https://api.github.com/users/robin-ede/followers",
"following_url": "https://api.github.com/users/robin-ede/following{/other_user}",
"gists_url": "https://api.github.com/users/robin-ede/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robin-ede/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robin-ede/subscriptions",
"organizations_url": "https://api.github.com/users/robin-ede/orgs",
"repos_url": "https://api.github.com/users/robin-ede/repos",
"events_url": "https://api.github.com/users/robin-ede/events{/privacy}",
"received_events_url": "https://api.github.com/users/robin-ede/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40019/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40019/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40018 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40018/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40018/comments | https://api.github.com/repos/huggingface/transformers/issues/40018/events | https://github.com/huggingface/transformers/issues/40018 | 3,302,280,724 | I_kwDOCUB6oc7E1M4U | 40,018 | need GptOssForSequenceClassification | {
"login": "cold-eye",
"id": 48782821,
"node_id": "MDQ6VXNlcjQ4NzgyODIx",
"avatar_url": "https://avatars.githubusercontent.com/u/48782821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cold-eye",
"html_url": "https://github.com/cold-eye",
"followers_url": "https://api.github.com/users/cold-eye/followers",
"following_url": "https://api.github.com/users/cold-eye/following{/other_user}",
"gists_url": "https://api.github.com/users/cold-eye/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cold-eye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cold-eye/subscriptions",
"organizations_url": "https://api.github.com/users/cold-eye/orgs",
"repos_url": "https://api.github.com/users/cold-eye/repos",
"events_url": "https://api.github.com/users/cold-eye/events{/privacy}",
"received_events_url": "https://api.github.com/users/cold-eye/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | null | [] | 2025-08-08T00:46:48 | 2025-08-19T11:54:39 | 2025-08-19T11:54:39 | NONE | null | null | null | null | ### Feature request
need GptOssForSequenceClassification
### Motivation
for text classification
### Your contribution
nothing | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40018/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40017 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40017/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40017/comments | https://api.github.com/repos/huggingface/transformers/issues/40017/events | https://github.com/huggingface/transformers/issues/40017 | 3,302,045,171 | I_kwDOCUB6oc7E0TXz | 40,017 | Major issues with transformers version causing rubbish generations with Gemma3 family using vllm | {
"login": "AbdelrahmanHagrass",
"id": 48356468,
"node_id": "MDQ6VXNlcjQ4MzU2NDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48356468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AbdelrahmanHagrass",
"html_url": "https://github.com/AbdelrahmanHagrass",
"followers_url": "https://api.github.com/users/AbdelrahmanHagrass/followers",
"following_url": "https://api.github.com/users/AbdelrahmanHagrass/following{/other_user}",
"gists_url": "https://api.github.com/users/AbdelrahmanHagrass/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AbdelrahmanHagrass/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AbdelrahmanHagrass/subscriptions",
"organizations_url": "https://api.github.com/users/AbdelrahmanHagrass/orgs",
"repos_url": "https://api.github.com/users/AbdelrahmanHagrass/repos",
"events_url": "https://api.github.com/users/AbdelrahmanHagrass/events{/privacy}",
"received_events_url": "https://api.github.com/users/AbdelrahmanHagrass/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-07T22:18:29 | 2025-08-08T12:22:56 | 2025-08-08T12:22:56 | NONE | null | null | null | null | ### System Info
.
### Who can help?
Major issues with the transformers version when used with Gemma3 family models on vLLM. The generated outputs are incorrect and unusable; this appears to be due to an incompatibility or regression in transformers. Please advise on compatible versions or fixes.
Example: generations are rubbish or meaningless compared to the expected outputs.
Tested with the latest vLLM and Gemma3 models.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
.
### Expected behavior
. | {
"login": "AbdelrahmanHagrass",
"id": 48356468,
"node_id": "MDQ6VXNlcjQ4MzU2NDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48356468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AbdelrahmanHagrass",
"html_url": "https://github.com/AbdelrahmanHagrass",
"followers_url": "https://api.github.com/users/AbdelrahmanHagrass/followers",
"following_url": "https://api.github.com/users/AbdelrahmanHagrass/following{/other_user}",
"gists_url": "https://api.github.com/users/AbdelrahmanHagrass/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AbdelrahmanHagrass/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AbdelrahmanHagrass/subscriptions",
"organizations_url": "https://api.github.com/users/AbdelrahmanHagrass/orgs",
"repos_url": "https://api.github.com/users/AbdelrahmanHagrass/repos",
"events_url": "https://api.github.com/users/AbdelrahmanHagrass/events{/privacy}",
"received_events_url": "https://api.github.com/users/AbdelrahmanHagrass/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40017/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40016 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40016/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40016/comments | https://api.github.com/repos/huggingface/transformers/issues/40016/events | https://github.com/huggingface/transformers/pull/40016 | 3,301,927,221 | PR_kwDOCUB6oc6iqX1G | 40,016 | [WIP] Fix naive for loops for MoE models resulting in sub 20% downstream MFU for training with trl, e.t.c (Qwen3, Deepseek V3, Ernie 4.5, GLM 4.5, Dots1) | {
"login": "perinmclaughlin",
"id": 7523023,
"node_id": "MDQ6VXNlcjc1MjMwMjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7523023?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/perinmclaughlin",
"html_url": "https://github.com/perinmclaughlin",
"followers_url": "https://api.github.com/users/perinmclaughlin/followers",
"following_url": "https://api.github.com/users/perinmclaughlin/following{/other_user}",
"gists_url": "https://api.github.com/users/perinmclaughlin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/perinmclaughlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/perinmclaughlin/subscriptions",
"organizations_url": "https://api.github.com/users/perinmclaughlin/orgs",
"repos_url": "https://api.github.com/users/perinmclaughlin/repos",
"events_url": "https://api.github.com/users/perinmclaughlin/events{/privacy}",
"received_events_url": "https://api.github.com/users/perinmclaughlin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T21:30:05 | 2025-09-04T03:27:04 | 2025-08-13T01:21:25 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40016",
"html_url": "https://github.com/huggingface/transformers/pull/40016",
"diff_url": "https://github.com/huggingface/transformers/pull/40016.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40016.patch",
"merged_at": null
} | # What does this PR do?
Fixes the longstanding issue of MoE training being bottlenecked by naive for loops in models with more than 8 experts.
This can result in sub-20% MFU in downstream training frameworks such as Unsloth and trl (Qwen3 30B on an H800).
There have already been several downstream issues from training frameworks, such as https://github.com/unslothai/unsloth/issues/2582, and open-source community members have made custom patches such as https://huggingface.co/Doctor-Shotgun/Qwen3-235B-A22B-Instruct-2507-ScatterMoE. Although not publicly available, I've also heard several complaints about this issue in the Axolotl and BeaverAI Discords.
This PR mainly replaces the moe() method from Deepseek V3 with the mathematically equivalent but faster Scatter MoE implementation. It also makes the other sparse MoE blocks inherit from DeepseekV3MoE, modifying the forward and init of those modules accordingly to use moe().
Also, from modular_deepseek_v3.py:
"""
CALL FOR CONTRIBUTION! I don't have time to optimise this right now, but expert weights need to be fused
to not have to do a loop here (deepseek has 256 experts soooo yeah).
"""
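For reviewers unfamiliar with the idea, the grouped-computation trick behind a Scatter MoE can be sketched in a few lines of NumPy. This is an illustrative toy, not the PR's actual kernel: it routes each token to a single expert instead of top-k, and all names here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
num_tokens, hidden, num_experts = 16, 8, 4

x = rng.normal(size=(num_tokens, hidden))
# one expert per token for simplicity (real MoE routing is top-k)
assign = rng.integers(0, num_experts, size=num_tokens)
# each "expert" is just a weight matrix in this toy
w = rng.normal(size=(num_experts, hidden, hidden))

# naive style: loop over experts and mask out their tokens
out_loop = np.zeros_like(x)
for e in range(num_experts):
    mask = assign == e
    out_loop[mask] = x[mask] @ w[e]

# grouped ("scatter") style: sort tokens by expert so each expert sees one
# contiguous slice, do one dense matmul per slice, then scatter the results
# back to the original token order
order = np.argsort(assign, kind="stable")
x_sorted = x[order]
counts = np.bincount(assign, minlength=num_experts)
offsets = np.concatenate(([0], np.cumsum(counts)))
out_sorted = np.empty_like(x_sorted)
for e in range(num_experts):
    s, t = offsets[e], offsets[e + 1]
    out_sorted[s:t] = x_sorted[s:t] @ w[e]  # dense slice, no boolean masking
out_scatter = np.empty_like(x)
out_scatter[order] = out_sorted

assert np.allclose(out_loop, out_scatter)
```

The win in practice comes from replacing many small masked matmuls with a few large contiguous ones (or a single grouped GEMM), which is what keeps MFU up when there are hundreds of experts.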
## Before submitting
- [N] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [Y] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [Y] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Models:
- text models: @ArthurZucker
| {
"login": "perinmclaughlin",
"id": 7523023,
"node_id": "MDQ6VXNlcjc1MjMwMjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7523023?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/perinmclaughlin",
"html_url": "https://github.com/perinmclaughlin",
"followers_url": "https://api.github.com/users/perinmclaughlin/followers",
"following_url": "https://api.github.com/users/perinmclaughlin/following{/other_user}",
"gists_url": "https://api.github.com/users/perinmclaughlin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/perinmclaughlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/perinmclaughlin/subscriptions",
"organizations_url": "https://api.github.com/users/perinmclaughlin/orgs",
"repos_url": "https://api.github.com/users/perinmclaughlin/repos",
"events_url": "https://api.github.com/users/perinmclaughlin/events{/privacy}",
"received_events_url": "https://api.github.com/users/perinmclaughlin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40016/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 6,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40016/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40015 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40015/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40015/comments | https://api.github.com/repos/huggingface/transformers/issues/40015/events | https://github.com/huggingface/transformers/pull/40015 | 3,301,811,984 | PR_kwDOCUB6oc6ip_2W | 40,015 | Update expected output values after #39885 (part 2) | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T20:42:51 | 2025-08-07T20:56:28 | 2025-08-07T20:52:53 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40015",
"html_url": "https://github.com/huggingface/transformers/pull/40015",
"diff_url": "https://github.com/huggingface/transformers/pull/40015.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40015.patch",
"merged_at": "2025-08-07T20:52:53"
} | # What does this PR do?
The changes are expected.
I also changed the atol and rtol to 1e-4.
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40015/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40014 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40014/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40014/comments | https://api.github.com/repos/huggingface/transformers/issues/40014/events | https://github.com/huggingface/transformers/pull/40014 | 3,301,573,959 | PR_kwDOCUB6oc6ipPWW | 40,014 | docs: fix duplication in 'en/optimizers.md' | {
"login": "luckyvickyricky",
"id": 75977640,
"node_id": "MDQ6VXNlcjc1OTc3NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/75977640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luckyvickyricky",
"html_url": "https://github.com/luckyvickyricky",
"followers_url": "https://api.github.com/users/luckyvickyricky/followers",
"following_url": "https://api.github.com/users/luckyvickyricky/following{/other_user}",
"gists_url": "https://api.github.com/users/luckyvickyricky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luckyvickyricky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luckyvickyricky/subscriptions",
"organizations_url": "https://api.github.com/users/luckyvickyricky/orgs",
"repos_url": "https://api.github.com/users/luckyvickyricky/repos",
"events_url": "https://api.github.com/users/luckyvickyricky/events{/privacy}",
"received_events_url": "https://api.github.com/users/luckyvickyricky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T19:01:56 | 2025-08-07T20:28:44 | 2025-08-07T20:28:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40014",
"html_url": "https://github.com/huggingface/transformers/pull/40014",
"diff_url": "https://github.com/huggingface/transformers/pull/40014.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40014.patch",
"merged_at": "2025-08-07T20:28:43"
} | # What does this PR do?
This PR fixes a minor duplication in the code:
- "gradient_checkpointing=True"
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @stevhliu
@Rocketknight1
Thank you!
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40014/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40014/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40013 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40013/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40013/comments | https://api.github.com/repos/huggingface/transformers/issues/40013/events | https://github.com/huggingface/transformers/pull/40013 | 3,301,568,753 | PR_kwDOCUB6oc6ipOQj | 40,013 | pin torchcodec==0.5.0 for now with torch 2.7.1 on daily CI | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T18:59:37 | 2025-08-07T21:05:41 | 2025-08-07T21:05:39 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40013",
"html_url": "https://github.com/huggingface/transformers/pull/40013",
"diff_url": "https://github.com/huggingface/transformers/pull/40013.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40013.patch",
"merged_at": "2025-08-07T21:05:39"
} | # What does this PR do?
Will unpin this weekend. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40013/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40012 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40012/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40012/comments | https://api.github.com/repos/huggingface/transformers/issues/40012/events | https://github.com/huggingface/transformers/pull/40012 | 3,301,539,912 | PR_kwDOCUB6oc6ipIc7 | 40,012 | unpin torch<2.8 on circleci | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T18:47:12 | 2025-08-07T19:31:19 | 2025-08-07T19:31:17 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40012",
"html_url": "https://github.com/huggingface/transformers/pull/40012",
"diff_url": "https://github.com/huggingface/transformers/pull/40012.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40012.patch",
"merged_at": "2025-08-07T19:31:17"
} | # What does this PR do?
Reverts #39951, now that torchcodec 2.6.0 is released.
(without this unpin, we will get errors with torchcodec 2.6.0 + torch 2.7.1) | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40012/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40011 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40011/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40011/comments | https://api.github.com/repos/huggingface/transformers/issues/40011/events | https://github.com/huggingface/transformers/pull/40011 | 3,301,522,407 | PR_kwDOCUB6oc6ipE6d | 40,011 | 🌐 [i18n-KO] Translated `optimizers.md` to Korean | {
"login": "chelsseeey",
"id": 152389483,
"node_id": "U_kgDOCRVHaw",
"avatar_url": "https://avatars.githubusercontent.com/u/152389483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chelsseeey",
"html_url": "https://github.com/chelsseeey",
"followers_url": "https://api.github.com/users/chelsseeey/followers",
"following_url": "https://api.github.com/users/chelsseeey/following{/other_user}",
"gists_url": "https://api.github.com/users/chelsseeey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chelsseeey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chelsseeey/subscriptions",
"organizations_url": "https://api.github.com/users/chelsseeey/orgs",
"repos_url": "https://api.github.com/users/chelsseeey/repos",
"events_url": "https://api.github.com/users/chelsseeey/events{/privacy}",
"received_events_url": "https://api.github.com/users/chelsseeey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T18:39:32 | 2025-08-13T17:00:47 | 2025-08-13T17:00:47 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40011",
"html_url": "https://github.com/huggingface/transformers/pull/40011",
"diff_url": "https://github.com/huggingface/transformers/pull/40011.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40011.patch",
"merged_at": "2025-08-13T17:00:47"
} | # What does this PR do?
Translated the `optimizers.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [X] Check for missing / redundant translations
- [X] Grammar check
- [X] Review or add new terms to the glossary
- [X] Check inline TOC (e.g. `[[lowercased-header]]`)
- [X] Check live-preview for gotchas
## Who can review? (Initial)
May you please review this PR?
@jungnerd, @luckyvickyricky, @chelsseeey, @skwh54, @amo33, @maximizemaxwell, @D15M4S
<!-- @harheem, @nsbg, @Youngdong2, @xhaktm00, @ssunbear, @ChoHyoungSeo, @judy-choi -->
<!-- @4N3MONE, @Kim-Ju-won, @ahnjj, @FacerAin, @ssum21, @TaskerJang, @HyunZ118 -->
<!-- @yijun-lee, @songi104, @chhaewxn, @AhnJoonSung, @jihyun-0611, @seopp, @pyapyapya -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Once the KREW team members have finished their review, reveal the comment below! -->
@stevhliu May you please review this PR? | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40011/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40010 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40010/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40010/comments | https://api.github.com/repos/huggingface/transformers/issues/40010/events | https://github.com/huggingface/transformers/issues/40010 | 3,301,318,909 | I_kwDOCUB6oc7ExiD9 | 40,010 | Customizable Logit Warping Strategies for Generation | {
"login": "PamelaBha",
"id": 219210686,
"node_id": "U_kgDODRDjvg",
"avatar_url": "https://avatars.githubusercontent.com/u/219210686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PamelaBha",
"html_url": "https://github.com/PamelaBha",
"followers_url": "https://api.github.com/users/PamelaBha/followers",
"following_url": "https://api.github.com/users/PamelaBha/following{/other_user}",
"gists_url": "https://api.github.com/users/PamelaBha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PamelaBha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PamelaBha/subscriptions",
"organizations_url": "https://api.github.com/users/PamelaBha/orgs",
"repos_url": "https://api.github.com/users/PamelaBha/repos",
"events_url": "https://api.github.com/users/PamelaBha/events{/privacy}",
"received_events_url": "https://api.github.com/users/PamelaBha/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-08-07T17:24:05 | 2025-10-13T20:43:45 | null | NONE | null | null | null | null | ### Feature request
Improve the generate() API by supporting custom, declarative logit warping strategies. Make it easier for users to plug in standard and custom LogitsProcessors via configuration or arguments without needing to subclass or dive into internals.
### Motivation
The generation module already supports rich logit manipulation through LogitsProcessorList, but:
- It is undocumented and hard to use for casual users
- Requires advanced subclassing to customize behaviors (e.g., word bans, domain constraints)
- Doesn’t support JSON- or dict-style configuration like many other parts of Transformers
Making logit warping more accessible enables:
- Prompt engineers and power users to fine-tune generation behavior
- Safer generation via blacklists or probability shifting
- Dynamic controls like repetition penalties or temperature annealing
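To make the idea concrete, here is a minimal, dependency-free sketch of what a declarative registry could look like. All names here (`PROCESSOR_REGISTRY`, `build_logit_processors`) are hypothetical and not part of transformers; the real library's building blocks are `LogitsProcessor` and `LogitsProcessorList`.

```python
# Toy logit processors operating on a plain list of scores.
def temperature(scores, t=1.0):
    return [s / t for s in scores]

def ban_tokens(scores, ids=()):
    banned = set(ids)
    return [float("-inf") if i in banned else s for i, s in enumerate(scores)]

# Registry mapping a "type" string to a processor function.
PROCESSOR_REGISTRY = {"temperature": temperature, "ban_tokens": ban_tokens}

def build_logit_processors(configs):
    """Turn a list of {"type": ..., **params} dicts into one callable pipeline."""
    steps = []
    for cfg in configs:
        cfg = dict(cfg)            # copy so the caller's dict is untouched
        fn = PROCESSOR_REGISTRY[cfg.pop("type")]
        # bind fn/kwargs via default args to avoid late-binding bugs
        steps.append(lambda scores, fn=fn, kw=cfg: fn(scores, **kw))

    def pipeline(scores):
        for step in steps:
            scores = step(scores)
        return scores

    return pipeline

warp = build_logit_processors([
    {"type": "temperature", "t": 2.0},
    {"type": "ban_tokens", "ids": [1]},
])
print(warp([4.0, 2.0, 0.0]))  # [2.0, -inf, 0.0]
```

A user could then pass such a config list straight to generate() or pipeline(), with the library translating each dict into the corresponding LogitsProcessor instance.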
### Your contribution
- Introduce a registry of `LogitsProcessor`s exposed to users via a high-level interface
- Support a new `logit_processors` argument in `pipeline()` and `generate()`
  - Accepts a list of dicts specifying the processor type and parameters
- Extend `LogitsProcessorList` to be constructed from configs | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40010/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40010/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40009 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40009/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40009/comments | https://api.github.com/repos/huggingface/transformers/issues/40009/events | https://github.com/huggingface/transformers/pull/40009 | 3,301,296,073 | PR_kwDOCUB6oc6ioUkT | 40,009 | feat: extract rev in attn_implementation kernels via @ | {
"login": "drbh",
"id": 9896130,
"node_id": "MDQ6VXNlcjk4OTYxMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9896130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drbh",
"html_url": "https://github.com/drbh",
"followers_url": "https://api.github.com/users/drbh/followers",
"following_url": "https://api.github.com/users/drbh/following{/other_user}",
"gists_url": "https://api.github.com/users/drbh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drbh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drbh/subscriptions",
"organizations_url": "https://api.github.com/users/drbh/orgs",
"repos_url": "https://api.github.com/users/drbh/repos",
"events_url": "https://api.github.com/users/drbh/events{/privacy}",
"received_events_url": "https://api.github.com/users/drbh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T17:16:09 | 2025-08-11T19:14:13 | 2025-08-11T19:14:13 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40009",
"html_url": "https://github.com/huggingface/transformers/pull/40009",
"diff_url": "https://github.com/huggingface/transformers/pull/40009.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40009.patch",
"merged_at": "2025-08-11T19:14:13"
} | This PR adds the ability to specify kernel revisions via the `@` symbol in the `attn_implementation` argument of `AutoModelForCausalLM.from_pretrained`.
### Example usage
```bash
uv run repro.py
```
```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
# "accelerate",
# "torch==2.7.0",
# "transformers",
# "kernels>=0.9.0",
# ]
#
# [tool.uv.sources]
# transformers = { git = "https://github.com/drbh/transformers.git", branch = "allow-kernel-rev" }
# ///
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
torch.set_float32_matmul_precision("high")
model_id = "meta-llama/Llama-3.2-3b-Instruct"
model = (
AutoModelForCausalLM.from_pretrained(
model_id,
# attn_implementation="kernels-community/flash-attn@main",
# attn_implementation="kernels-community/flash-attn@56449c1aa267bd0f48a191f0e6979dedf9f2ec32", # (main's sha)
attn_implementation="kernels-community/flash-attn@09eec95", # main-1 commit sha
torch_dtype=torch.bfloat16,
)
.eval()
.cuda()
)
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")
print("+ Model loaded successfully.")
prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
generation_config = GenerationConfig(
temperature=0.1,
top_p=0.95,
top_k=50,
num_beams=1,
max_new_tokens=50,
do_sample=True,
seed=42,
)
start_time = time.time()
with torch.inference_mode():
outputs = model.generate(
**inputs,
generation_config=generation_config,
)
end_time = time.time()
print(f"+ Generation time: {end_time - start_time:.2f} seconds")
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
print("+ Inference completed successfully.")
``` | {
"login": "drbh",
"id": 9896130,
"node_id": "MDQ6VXNlcjk4OTYxMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9896130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drbh",
"html_url": "https://github.com/drbh",
"followers_url": "https://api.github.com/users/drbh/followers",
"following_url": "https://api.github.com/users/drbh/following{/other_user}",
"gists_url": "https://api.github.com/users/drbh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drbh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drbh/subscriptions",
"organizations_url": "https://api.github.com/users/drbh/orgs",
"repos_url": "https://api.github.com/users/drbh/repos",
"events_url": "https://api.github.com/users/drbh/events{/privacy}",
"received_events_url": "https://api.github.com/users/drbh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40009/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40009/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40008 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40008/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40008/comments | https://api.github.com/repos/huggingface/transformers/issues/40008/events | https://github.com/huggingface/transformers/pull/40008 | 3,301,222,762 | PR_kwDOCUB6oc6ioFPH | 40,008 | Fixes for EncoderDecoderCache | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T16:50:59 | 2025-08-18T15:51:06 | 2025-08-18T15:51:06 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40008",
"html_url": "https://github.com/huggingface/transformers/pull/40008",
"diff_url": "https://github.com/huggingface/transformers/pull/40008.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40008.patch",
"merged_at": "2025-08-18T15:51:05"
} | The `EncoderDecoderCache` object is not compatible with `nn.DataParallel` because it expects to be instantiated with 2 arguments. This probably was not an issue before, because the legacy cache was a tuple of tuples (thus compatible with `nn.DataParallel.gather`), but it is an issue now.
This PR proposes a fix by changing the `EncoderDecoderCache.__init__` to make it more flexible: it retrieves all passed arguments using `*caches` and expects either:
- 2 arguments, which are 2 `Cache` objects, as is the case today;
- 1 argument, which is an iterable of `tuple[torch.FloatTensor, ...]` compatible with the legacy cache but also with `nn.DataParallel`, which gathers objects using `EncoderDecoderCache(map(gathered_past_key_value))` -> hence `create_dynamic_caches_from_legacy_cache` was slightly changed to support non-indexable iterables
For other numbers of arguments, it fails.
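A minimal sketch of the `*caches` dispatch described above (class and attribute names are illustrative stand-ins, not the exact transformers implementation):

```python
# Hypothetical sketch of a flexible __init__ that accepts either two Cache
# objects or one iterable of per-layer tensor tuples (the legacy format).
class Cache:  # stand-in for transformers' Cache base class
    pass

class EncoderDecoderCacheSketch:
    def __init__(self, *caches):
        if len(caches) == 2 and all(isinstance(c, Cache) for c in caches):
            # Today's path: (self_attention_cache, cross_attention_cache).
            self.self_attention_cache, self.cross_attention_cache = caches
        elif len(caches) == 1:
            # Legacy path: a single iterable of per-layer tuples, e.g. what
            # nn.DataParallel.gather hands back. Iterating (not indexing)
            # also supports non-indexable iterables such as map objects.
            self.layers = [tuple(layer) for layer in caches[0]]
        else:
            raise ValueError(f"expected 1 or 2 cache arguments, got {len(caches)}")

a, b = Cache(), Cache()
cache = EncoderDecoderCacheSketch(a, b)
print(cache.self_attention_cache is a)  # -> True

# A non-indexable iterable (here an iterator) is consumed just fine.
legacy = EncoderDecoderCacheSketch(iter([("k0", "v0"), ("k1", "v1")]))
print(len(legacy.layers))  # -> 2
```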
There was also a line `cross_attention_cache.is_updated[layer_idx] = True`, which was removed because it fails if `cross_attention_cache` has no `is_updated` attribute, e.g. when it is a `DynamicCache` object.
The drawback of these changes is that initializing an empty `EncoderDecoderCache` is no longer possible (this was done once in the codebase and has been fixed) and that the internal mechanism is a bit more convoluted than before.
This also has the benefit of fixing one `t5` test that was failing (`test_multi_gpu_data_parallel_forward`) and of no longer skipping the same test in `t5gemma` | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40008/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40008/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40007 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40007/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40007/comments | https://api.github.com/repos/huggingface/transformers/issues/40007/events | https://github.com/huggingface/transformers/pull/40007 | 3,301,163,932 | PR_kwDOCUB6oc6in4v1 | 40,007 | 🚨 Use lru_cache for sine pos embeddings MaskFormer | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6886428489,
"node_id": "LA_kwDOCUB6oc8AAAABmnaPSQ",
"url": "https://api.github.com/repos/huggingface/transformers/labels/run-slow",
"name": "run-slow",
"color": "E1D519",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-08-07T16:31:55 | 2025-08-13T17:05:23 | 2025-08-13T17:05:23 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40007",
"html_url": "https://github.com/huggingface/transformers/pull/40007",
"diff_url": "https://github.com/huggingface/transformers/pull/40007.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40007.patch",
"merged_at": "2025-08-13T17:05:22"
} | # What does this PR do?
Since sine pos embeddings only depend on a fixed feature shape (for these models at least), this seems like a free speedup. This changes MaskFormer and related models only for now (its sine pos embedding module is also used in sam2), but happy to extend this to other models in the library!
The speed gains are not negligible: for sam2_video on my machine, it goes from 24.4 to 25.8 fps, a 6% speed improvement.
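The caching idea can be illustrated with a small framework-free sketch (a hypothetical function, not the actual MaskFormer module): since the embedding table depends only on its shape arguments, `functools.lru_cache` lets every forward pass after the first reuse the computed table.

```python
# Hypothetical sketch of memoizing sine position embeddings on their shape.
import math
from functools import lru_cache

@lru_cache(maxsize=8)
def sine_position_embedding(seq_len: int, dim: int, temperature: float = 10000.0):
    # Returns a (seq_len, dim) table of interleaved sin/cos values as nested
    # tuples, so the result is immutable and safe to share across calls.
    table = []
    for pos in range(seq_len):
        row = []
        for i in range(dim):
            angle = pos / temperature ** (2 * (i // 2) / dim)
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        table.append(tuple(row))
    return tuple(table)

# The first call computes; a second call with the same shape is a cache hit.
emb = sine_position_embedding(16, 8)
emb_again = sine_position_embedding(16, 8)
print(sine_position_embedding.cache_info().hits)  # -> 1
```

In a real model the cached result would typically be moved to the right device/dtype after lookup, since `lru_cache` keys only on the shape arguments.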
| {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40007/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40006 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40006/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40006/comments | https://api.github.com/repos/huggingface/transformers/issues/40006/events | https://github.com/huggingface/transformers/pull/40006 | 3,301,127,047 | PR_kwDOCUB6oc6inw89 | 40,006 | Fix PerceptionLM image preprocessing for non-tiled image input. | {
"login": "shuminghu",
"id": 2934295,
"node_id": "MDQ6VXNlcjI5MzQyOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2934295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuminghu",
"html_url": "https://github.com/shuminghu",
"followers_url": "https://api.github.com/users/shuminghu/followers",
"following_url": "https://api.github.com/users/shuminghu/following{/other_user}",
"gists_url": "https://api.github.com/users/shuminghu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuminghu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuminghu/subscriptions",
"organizations_url": "https://api.github.com/users/shuminghu/orgs",
"repos_url": "https://api.github.com/users/shuminghu/repos",
"events_url": "https://api.github.com/users/shuminghu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuminghu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T16:17:59 | 2025-08-12T08:41:01 | 2025-08-12T08:40:22 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40006",
"html_url": "https://github.com/huggingface/transformers/pull/40006",
"diff_url": "https://github.com/huggingface/transformers/pull/40006.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40006.patch",
"merged_at": "2025-08-12T08:40:22"
} | Add support for vanilla images that only have C, H, W dims but no tiles dim.
This is a non-default image shape for PLM, but it is useful in demos and on low-resource devices.
@zucchini-nlp | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40006/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40005 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40005/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40005/comments | https://api.github.com/repos/huggingface/transformers/issues/40005/events | https://github.com/huggingface/transformers/pull/40005 | 3,300,952,108 | PR_kwDOCUB6oc6inMLi | 40,005 | [fix] Pass video inputs to plm | {
"login": "4g",
"id": 664530,
"node_id": "MDQ6VXNlcjY2NDUzMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/664530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/4g",
"html_url": "https://github.com/4g",
"followers_url": "https://api.github.com/users/4g/followers",
"following_url": "https://api.github.com/users/4g/following{/other_user}",
"gists_url": "https://api.github.com/users/4g/gists{/gist_id}",
"starred_url": "https://api.github.com/users/4g/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/4g/subscriptions",
"organizations_url": "https://api.github.com/users/4g/orgs",
"repos_url": "https://api.github.com/users/4g/repos",
"events_url": "https://api.github.com/users/4g/events{/privacy}",
"received_events_url": "https://api.github.com/users/4g/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T15:24:09 | 2025-08-08T04:24:38 | 2025-08-07T15:25:07 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40005",
"html_url": "https://github.com/huggingface/transformers/pull/40005",
"diff_url": "https://github.com/huggingface/transformers/pull/40005.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40005.patch",
"merged_at": null
} | 1. `video_inputs` were not being passed to PLM, resulting in the same output for all videos.
2. This was breaking the official example. More details at https://github.com/huggingface/transformers/issues/40004
3. Tested locally with different videos.
# Pass video inputs to plm
Fixes [#40004](https://github.com/huggingface/transformers/issues/40004)
## Who can review?
@zucchini-nlp
| {
"login": "4g",
"id": 664530,
"node_id": "MDQ6VXNlcjY2NDUzMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/664530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/4g",
"html_url": "https://github.com/4g",
"followers_url": "https://api.github.com/users/4g/followers",
"following_url": "https://api.github.com/users/4g/following{/other_user}",
"gists_url": "https://api.github.com/users/4g/gists{/gist_id}",
"starred_url": "https://api.github.com/users/4g/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/4g/subscriptions",
"organizations_url": "https://api.github.com/users/4g/orgs",
"repos_url": "https://api.github.com/users/4g/repos",
"events_url": "https://api.github.com/users/4g/events{/privacy}",
"received_events_url": "https://api.github.com/users/4g/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40005/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40004 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40004/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40004/comments | https://api.github.com/repos/huggingface/transformers/issues/40004/events | https://github.com/huggingface/transformers/issues/40004 | 3,300,810,266 | I_kwDOCUB6oc7Evl4a | 40,004 | video_inputs are not passed to perception_lm | {
"login": "4g",
"id": 664530,
"node_id": "MDQ6VXNlcjY2NDUzMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/664530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/4g",
"html_url": "https://github.com/4g",
"followers_url": "https://api.github.com/users/4g/followers",
"following_url": "https://api.github.com/users/4g/following{/other_user}",
"gists_url": "https://api.github.com/users/4g/gists{/gist_id}",
"starred_url": "https://api.github.com/users/4g/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/4g/subscriptions",
"organizations_url": "https://api.github.com/users/4g/orgs",
"repos_url": "https://api.github.com/users/4g/repos",
"events_url": "https://api.github.com/users/4g/events{/privacy}",
"received_events_url": "https://api.github.com/users/4g/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-07T14:44:49 | 2025-08-07T18:22:08 | 2025-08-07T18:22:08 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.55.0
- Platform: Linux-6.14.0-27-generic-x86_64-with-glibc2.39
- Python version: 3.11.9
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.4.3
- Accelerate version: 1.6.0
- Accelerate config: not found
- DeepSpeed version: 0.17.1
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
@zucchini-nlp
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoProcessor, AutoModelForImageTextToText
from huggingface_hub import hf_hub_download
MODEL_PATH = "facebook/Perception-LM-1B"
processor = AutoProcessor.from_pretrained(MODEL_PATH, use_fast=True)
model = AutoModelForImageTextToText.from_pretrained(MODEL_PATH).to("cuda")
video_file = hf_hub_download(
repo_id="shumingh/perception_lm_test_videos",
filename="GUWR5TyiY-M_000012_000022.mp4",
repo_type="dataset",
)
conversation = [
{
"role": "user",
"content": [
{
"type": "video",
"url": video_file,
},
{"type": "text", "text": "Can you describe the video in detail?"},
],
}
]
inputs = processor.apply_chat_template(
[conversation],
num_frames=32,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
video_load_backend="decord",
)
inputs = inputs.to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=256)
input_length = inputs["input_ids"].shape[1]
generate_ids_without_inputs = generate_ids[:, input_length:]
for output in processor.batch_decode(
generate_ids_without_inputs, skip_special_tokens=True
):
print(output)
print(inputs.pixel_values_videos)
```
### Expected behavior
The error happens in the [official example](https://huggingface.co/facebook/Perception-LM-1B) when using PLM with video inputs. The output has no correlation to the input, because the `pixel_values_videos` key is missing from the inputs. I believe this error was introduced in [this commit](https://github.com/huggingface/transformers/commit/947a37e8f5bc50bc0e9a77c0d16b038adcb056d0),
when `videos_inputs` was removed from `BatchFeature`.
When a video is passed in the conversation, the inputs only have these keys:
{'input_ids', 'attention_mask'} | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40004/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40003 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40003/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40003/comments | https://api.github.com/repos/huggingface/transformers/issues/40003/events | https://github.com/huggingface/transformers/pull/40003 | 3,300,791,406 | PR_kwDOCUB6oc6imqgX | 40,003 | fix: remove CHAT_TEMPLATE import in tests for deepseek-vl | {
"login": "geetu040",
"id": 90601662,
"node_id": "MDQ6VXNlcjkwNjAxNjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/90601662?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/geetu040",
"html_url": "https://github.com/geetu040",
"followers_url": "https://api.github.com/users/geetu040/followers",
"following_url": "https://api.github.com/users/geetu040/following{/other_user}",
"gists_url": "https://api.github.com/users/geetu040/gists{/gist_id}",
"starred_url": "https://api.github.com/users/geetu040/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/geetu040/subscriptions",
"organizations_url": "https://api.github.com/users/geetu040/orgs",
"repos_url": "https://api.github.com/users/geetu040/repos",
"events_url": "https://api.github.com/users/geetu040/events{/privacy}",
"received_events_url": "https://api.github.com/users/geetu040/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T14:39:43 | 2025-08-07T16:20:05 | 2025-08-07T16:19:37 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40003",
"html_url": "https://github.com/huggingface/transformers/pull/40003",
"diff_url": "https://github.com/huggingface/transformers/pull/40003.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40003.patch",
"merged_at": "2025-08-07T16:19:37"
} | # What does this PR do?
Fixes #39966
This PR removes the `CHAT_TEMPLATE` imports from `test_processing_deepseek_vl.py` and `test_processing_deepseek_vl_hybrid.py`. These imports were referencing weight conversion scripts that are not included in the PyPI distribution, which causes the tests to fail in the v4.55.0 release.
The fix aligns with the approach used in [test_processing_emu3.py](https://github.com/huggingface/transformers/blob/main/tests/models/emu3/test_processing_emu3.py)
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you write any new necessary tests?
## Who can review?
@rasmi @zucchini-nlp @ArthurZucker | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40003/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40002 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40002/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40002/comments | https://api.github.com/repos/huggingface/transformers/issues/40002/events | https://github.com/huggingface/transformers/pull/40002 | 3,300,782,135 | PR_kwDOCUB6oc6imoho | 40,002 | [`Flash Attention`] Fix flash attention integration | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-08-07T14:37:09 | 2025-08-12T20:36:52 | 2025-08-12T15:24:10 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40002",
"html_url": "https://github.com/huggingface/transformers/pull/40002",
"diff_url": "https://github.com/huggingface/transformers/pull/40002.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40002.patch",
"merged_at": "2025-08-12T15:24:10"
} | The current flash attention implementation has several issues:
- `test_flash_attn_2_equivalence` was failing in all models (I think)
- FA kwargs no longer had `max_length_q/k`
- Varlen was handled incorrectly, leading to errors by always opting for this path even when it did not apply (position ids but no attention mask)
- Compiling was failing due to the usage of `globals()`
- `fa_kwargs` did not get the correct dtype, as `cat` apparently recast to long (fixed by #39843)
- `test_flash_attn_2_inference_equivalence` fails on a lot of models (e.g. Bart) due to a refactor of the test which, it seems, is not compatible with those models
- fullgraph compile did not work before
- Small fixes on #40029 according to Raushan's comments at https://github.com/huggingface/transformers/pull/40002#discussion_r2269873268
To consider:
- Supports top left mask should be deprecated, but some models still depend on it, so leaving it with a TODO
- ^ same for is_flash_attention available (we now lazy load on setting the attention implementation)
Discovered internally as well in #38188 for example | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40002/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 6,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/40002/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40001 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40001/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40001/comments | https://api.github.com/repos/huggingface/transformers/issues/40001/events | https://github.com/huggingface/transformers/issues/40001 | 3,300,732,592 | I_kwDOCUB6oc7EvS6w | 40,001 | Possible wrong init call | {
"login": "zhizhongli-sony",
"id": 165778602,
"node_id": "U_kgDOCeGUqg",
"avatar_url": "https://avatars.githubusercontent.com/u/165778602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhizhongli-sony",
"html_url": "https://github.com/zhizhongli-sony",
"followers_url": "https://api.github.com/users/zhizhongli-sony/followers",
"following_url": "https://api.github.com/users/zhizhongli-sony/following{/other_user}",
"gists_url": "https://api.github.com/users/zhizhongli-sony/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhizhongli-sony/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhizhongli-sony/subscriptions",
"organizations_url": "https://api.github.com/users/zhizhongli-sony/orgs",
"repos_url": "https://api.github.com/users/zhizhongli-sony/repos",
"events_url": "https://api.github.com/users/zhizhongli-sony/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhizhongli-sony/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-07T14:22:51 | 2025-10-05T08:03:00 | 2025-10-05T08:03:00 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.52.4
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.30.1
- Safetensors version: 0.5.3
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: 0.16.4
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: no
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
trainer: @zach-huggingface @SunMarc
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://github.com/huggingface/transformers/blob/43d3b1931a7d3cddac9947adcb19bb3b1f8abedb/src/transformers/modeling_utils.py#L5382
The function name says it initializes *missing* keys, but the argument given is all keys.
### Expected behavior
checkpoint_keys -> missing_keys + mismatched_keys | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40001/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/40001/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40000 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40000/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40000/comments | https://api.github.com/repos/huggingface/transformers/issues/40000/events | https://github.com/huggingface/transformers/pull/40000 | 3,300,715,180 | PR_kwDOCUB6oc6imaWT | 40,000 | Fix an annoying flaky test | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T14:18:08 | 2025-08-12T13:19:51 | 2025-08-08T08:32:51 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40000",
"html_url": "https://github.com/huggingface/transformers/pull/40000",
"diff_url": "https://github.com/huggingface/transformers/pull/40000.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40000.patch",
"merged_at": "2025-08-08T08:32:51"
} | # What does this PR do?
As per title: trying to download the image from `picsum` times out if we run the test many times, so I moved the image to the HF Hub. AFAIK we can make many subsequent requests to Hub downloads when testing | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40000/reactions",
"total_count": 6,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40000/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39999 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39999/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39999/comments | https://api.github.com/repos/huggingface/transformers/issues/39999/events | https://github.com/huggingface/transformers/pull/39999 | 3,300,702,196 | PR_kwDOCUB6oc6imXiP | 39,999 | allow TP to work in ND-parallel with fsdp cpu ram efficient loading | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-07T14:14:48 | 2025-08-25T08:56:54 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39999",
"html_url": "https://github.com/huggingface/transformers/pull/39999",
"diff_url": "https://github.com/huggingface/transformers/pull/39999.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39999.patch",
"merged_at": null
} | # What does this PR do?
For N-D parallelism, when using FSDP2+TP with cpu_ram_efficient_loading, we have to specify the device_map as "meta" for non-rank0 processes. Additionally, even though we already know what device it will ultimately end up on through the device_mesh, we don't want to change the device_map for it, since we've already defined it as the `meta` device.
@SunMarc @S1ro1 @ArthurZucker
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39999/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39998 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39998/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39998/comments | https://api.github.com/repos/huggingface/transformers/issues/39998/events | https://github.com/huggingface/transformers/pull/39998 | 3,300,690,487 | PR_kwDOCUB6oc6imU-O | 39,998 | Raising error when quantizing a quantized model | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T14:12:01 | 2025-08-07T20:37:27 | 2025-08-07T20:37:26 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39998",
"html_url": "https://github.com/huggingface/transformers/pull/39998",
"diff_url": "https://github.com/huggingface/transformers/pull/39998.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39998.patch",
"merged_at": "2025-08-07T20:37:26"
} | # What does this PR do?
This PR raises an error if we try to quantize a quantized model using a different quantization method | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39998/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39997 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39997/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39997/comments | https://api.github.com/repos/huggingface/transformers/issues/39997/events | https://github.com/huggingface/transformers/pull/39997 | 3,300,687,325 | PR_kwDOCUB6oc6imUTb | 39,997 | make sure position_ids are passed in for causal mask creation for gpt-oss | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-07T14:11:06 | 2025-08-12T14:34:29 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39997",
"html_url": "https://github.com/huggingface/transformers/pull/39997",
"diff_url": "https://github.com/huggingface/transformers/pull/39997.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39997.patch",
"merged_at": null
} | # What does this PR do?
Packing won't work with gpt-oss since it doesn't respect the position ids. See https://github.com/huggingface/transformers/pull/39194
Fixes # (issue)
@ArthurZucker @Cyrilvallez
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39997/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39996 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39996/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39996/comments | https://api.github.com/repos/huggingface/transformers/issues/39996/events | https://github.com/huggingface/transformers/pull/39996 | 3,300,663,541 | PR_kwDOCUB6oc6imPPf | 39,996 | Tie weights recursively on all submodels | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T14:04:37 | 2025-08-08T14:03:19 | 2025-08-08T14:03:16 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39996",
"html_url": "https://github.com/huggingface/transformers/pull/39996",
"diff_url": "https://github.com/huggingface/transformers/pull/39996.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39996.patch",
"merged_at": "2025-08-08T14:03:16"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/39900. Current code calls custom `_tie_weights` recursively on all `modules`, but does not recursively tie the embeddings or the encoder/decoder parts
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39996/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39995 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39995/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39995/comments | https://api.github.com/repos/huggingface/transformers/issues/39995/events | https://github.com/huggingface/transformers/pull/39995 | 3,300,582,975 | PR_kwDOCUB6oc6il-E9 | 39,995 | Fix consistency | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T13:44:48 | 2025-08-07T13:57:52 | 2025-08-07T13:52:41 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39995",
"html_url": "https://github.com/huggingface/transformers/pull/39995",
"diff_url": "https://github.com/huggingface/transformers/pull/39995.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39995.patch",
"merged_at": "2025-08-07T13:52:41"
} | # What does this PR do?
cc @qubvel for viz after your PR! | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39995/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39994 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39994/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39994/comments | https://api.github.com/repos/huggingface/transformers/issues/39994/events | https://github.com/huggingface/transformers/pull/39994 | 3,300,575,657 | PR_kwDOCUB6oc6il8gg | 39,994 | chore: Add type hints to import_utils.py module | {
"login": "wirthual",
"id": 2640499,
"node_id": "MDQ6VXNlcjI2NDA0OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2640499?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wirthual",
"html_url": "https://github.com/wirthual",
"followers_url": "https://api.github.com/users/wirthual/followers",
"following_url": "https://api.github.com/users/wirthual/following{/other_user}",
"gists_url": "https://api.github.com/users/wirthual/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wirthual/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wirthual/subscriptions",
"organizations_url": "https://api.github.com/users/wirthual/orgs",
"repos_url": "https://api.github.com/users/wirthual/repos",
"events_url": "https://api.github.com/users/wirthual/events{/privacy}",
"received_events_url": "https://api.github.com/users/wirthual/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T13:42:35 | 2025-08-21T11:20:21 | 2025-08-21T11:20:20 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39994",
"html_url": "https://github.com/huggingface/transformers/pull/39994",
"diff_url": "https://github.com/huggingface/transformers/pull/39994.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39994.patch",
"merged_at": null
} | # What does this PR do?
Add type hints to `import_utils.py`
Based on these [docs](https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-library-stubs-or-py-typed-marker), this change should avoid errors like:
```
infinity_emb/inference/loading_strategy.py:10: error: Skipping analyzing "transformers.utils.import_utils": module is installed, but missing library stubs or py.typed marker [import-untyped]
```
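For illustration, the kind of annotation involved might look like this (a hypothetical, simplified example — not the actual diff, and not the real implementation of the helper):

```python
# Hypothetical example only, not the actual transformers diff. Annotating
# the return type lets mypy analyze call sites instead of skipping the
# module with an [import-untyped] error.
from functools import lru_cache


@lru_cache
def is_torch_available() -> bool:
    """Return True if torch can be imported (simplified sketch)."""
    try:
        import torch  # noqa: F401
    except ImportError:
        return False
    return True
```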
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
## Who can review?
@stevhliu
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39994/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39993 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39993/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39993/comments | https://api.github.com/repos/huggingface/transformers/issues/39993/events | https://github.com/huggingface/transformers/pull/39993 | 3,300,566,912 | PR_kwDOCUB6oc6il6m3 | 39,993 | Default to dequantize if cpu in device_map for mxfp4 | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-08-07T13:40:10 | 2025-08-12T14:48:54 | 2025-08-12T14:48:52 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39993",
"html_url": "https://github.com/huggingface/transformers/pull/39993",
"diff_url": "https://github.com/huggingface/transformers/pull/39993.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39993.patch",
"merged_at": "2025-08-12T14:48:52"
} | # What does this PR do?
For the mxfp4 gpt-oss model, if no CUDA device is available and the model is prequantized, we default to dequantizing it (after raising a warning) so that it can run on CPU.
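The fallback can be sketched as pure decision logic (illustrative only — the function name, parameters, and structure are assumptions, not the actual transformers code):

```python
import warnings


def should_dequantize_mxfp4(cuda_available: bool, prequantized: bool, dequantize_requested: bool) -> bool:
    """Return True when MXFP4 weights should be dequantized to a dense dtype.

    Sketch of the fallback behavior: an explicit request always wins, and a
    prequantized checkpoint with no CUDA device falls back with a warning.
    """
    if dequantize_requested:
        return True
    if prequantized and not cuda_available:
        warnings.warn(
            "MXFP4 kernels require a CUDA device; dequantizing the model so it can run on CPU."
        )
        return True
    return False
```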
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39993/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39992 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39992/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39992/comments | https://api.github.com/repos/huggingface/transformers/issues/39992/events | https://github.com/huggingface/transformers/issues/39992 | 3,300,511,232 | I_kwDOCUB6oc7Euc4A | 39,992 | [gpt-oss] Transform checkpoint from safetensors to state dict | {
"login": "fingertap",
"id": 7274689,
"node_id": "MDQ6VXNlcjcyNzQ2ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7274689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fingertap",
"html_url": "https://github.com/fingertap",
"followers_url": "https://api.github.com/users/fingertap/followers",
"following_url": "https://api.github.com/users/fingertap/following{/other_user}",
"gists_url": "https://api.github.com/users/fingertap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fingertap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fingertap/subscriptions",
"organizations_url": "https://api.github.com/users/fingertap/orgs",
"repos_url": "https://api.github.com/users/fingertap/repos",
"events_url": "https://api.github.com/users/fingertap/events{/privacy}",
"received_events_url": "https://api.github.com/users/fingertap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T13:24:06 | 2025-09-15T08:02:55 | 2025-09-15T08:02:55 | NONE | null | null | null | null | Yesterday I was working on gpt-oss. However, loading the weights gave me trouble.
For models like Qwen, I did things like this:
1. Create model on meta device
2. FSDP2 shard it, so it can fit in memory
3. On each GPU, it reads weights from the safetensors files in a generator style to save memory.
4. Chunk the weights and copy to the FSDP’s DTensor.
This routine does not work for gpt-oss. Within `from_pretrained`, the mxfp4 quantizer dequantizes the weights, yet I cannot find a clean way to reuse this capability. I had to modify the process and initialize a full copy of the model in CPU memory.
How can we transform the safetensors to state dict directly? | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39992/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39991 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39991/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39991/comments | https://api.github.com/repos/huggingface/transformers/issues/39991/events | https://github.com/huggingface/transformers/issues/39991 | 3,300,416,534 | I_kwDOCUB6oc7EuFwW | 39,991 | Gemma3n get_placeholder_mask issue | {
"login": "Znerual",
"id": 22452386,
"node_id": "MDQ6VXNlcjIyNDUyMzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/22452386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Znerual",
"html_url": "https://github.com/Znerual",
"followers_url": "https://api.github.com/users/Znerual/followers",
"following_url": "https://api.github.com/users/Znerual/following{/other_user}",
"gists_url": "https://api.github.com/users/Znerual/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Znerual/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Znerual/subscriptions",
"organizations_url": "https://api.github.com/users/Znerual/orgs",
"repos_url": "https://api.github.com/users/Znerual/repos",
"events_url": "https://api.github.com/users/Znerual/events{/privacy}",
"received_events_url": "https://api.github.com/users/Znerual/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-07T12:58:50 | 2025-08-08T10:45:48 | 2025-08-08T10:45:48 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.55.0
- Platform: Linux-6.8.0-71-generic-x86_64-with-glibc2.39
- Python version: 3.12.2
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.6.1
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
### Who can help?
@qubvel @eustlb
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Running the Gemma3n model on audio data using the description from huggingface:
```python
import io
import librosa
import numpy as np
import requests
import torch
from transformers import AutoProcessor, Gemma3nForConditionalGeneration
processor = AutoProcessor.from_pretrained("google/gemma-3n-E4B-it")
model = Gemma3nForConditionalGeneration.from_pretrained(
"google/gemma-3n-E4B-it",
torch_dtype="auto",
device_map="auto"
).eval()
# 2. Download and Process the Problematic Audio File
audio_url = "https://styletts.github.io/wavs/styletts/abstract.mp3"
print(f"Downloading audio from: {audio_url}")
response = requests.get(audio_url)
response.raise_for_status()
audio_bytes = response.content
print("Processing audio to 16kHz mono float32...")
try:
audio_data, _ = librosa.load(
io.BytesIO(audio_bytes),
sr=16000,
mono=True,
res_type='scipy'
)
processed_audio = audio_data.astype(np.float32)
print("Audio processing complete.")
except Exception as e:
print(f"Failed to process audio: {e}")
exit()
content = [
{"type": "text", "text": "Some text prompt"},
    {"type": "audio", "audio": processed_audio}
]
messages = [
{"role": "system", "content": [{"type": "text", "text" : "You are a helpful assistant."}]},
{"role": "user", "content": content}
]
inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=model.dtype)
input_len = inputs["input_ids"].shape[-1]
with torch.inference_mode():
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
outputs = outputs[0][input_len:]
response_text = processor.decode(outputs, skip_special_tokens=True)
final_response = response_text.split("model\n")[-1].strip()
```
results in the error:
```
Traceback (most recent call last):
File "***/gemma-api/bug.py", line 51, in <module>
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "***/gemma-api-r8Np3HKg-py3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "***/gemma-api-r8Np3HKg-py3.12/lib/python3.12/site-packages/transformers/generation/utils.py", line 2634, in generate
result = self._sample(
^^^^^^^^^^^^^
File "/***gemma-api-r8Np3HKg-py3.12/lib/python3.12/site-packages/transformers/generation/utils.py", line 3615, in _sample
outputs = self(**model_inputs, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/***/gemma-api-r8Np3HKg-py3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/***/gemma-api-r8Np3HKg-py3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/***gemma-api-r8Np3HKg-py3.12/lib/python3.12/site-packages/transformers/utils/generic.py", line 959, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/***/gemma-api-r8Np3HKg-py3.12/lib/python3.12/site-packages/transformers/models/gemma3n/modeling_gemma3n.py", line 2283, in forward
outputs = self.model(
^^^^^^^^^^^
File "/***/gemma-api-r8Np3HKg-py3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/***/gemma-api-r8Np3HKg-py3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "***/gemma-api-r8Np3HKg-py3.12/lib/python3.12/site-packages/transformers/utils/generic.py", line 959, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "***/gemma-api-r8Np3HKg-py3.12/lib/python3.12/site-packages/transformers/models/gemma3n/modeling_gemma3n.py", line 2117, in forward
_, special_audio_mask = self.get_placeholder_mask(
^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Gemma3nModel.get_placeholder_mask() missing 1 required positional argument: 'image_features'
```
and it can be fixed by modifying the `modeling_gemma3n.py` file in lines 1966 to 1969, by making the arguments optional:
OLD
```python
def get_placeholder_mask(
self,
input_ids: torch.LongTensor,
inputs_embeds: torch.FloatTensor,
image_features: torch.FloatTensor,
audio_features: torch.FloatTensor,
):
```
FIXED
```python
def get_placeholder_mask(
self,
input_ids: torch.LongTensor | None = None,
inputs_embeds: torch.FloatTensor | None = None,
image_features: torch.FloatTensor | None = None,
audio_features: torch.FloatTensor | None = None,
):
```
### Expected behavior
The script runs without an error. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39991/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39991/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39990 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39990/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39990/comments | https://api.github.com/repos/huggingface/transformers/issues/39990/events | https://github.com/huggingface/transformers/pull/39990 | 3,300,322,108 | PR_kwDOCUB6oc6ilFd5 | 39,990 | Update expected output values after #39885 (part 1) | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T12:31:42 | 2025-08-07T14:00:30 | 2025-08-07T14:00:29 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39990",
"html_url": "https://github.com/huggingface/transformers/pull/39990",
"diff_url": "https://github.com/huggingface/transformers/pull/39990.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39990.patch",
"merged_at": "2025-08-07T14:00:28"
} | # What does this PR do?
The changes are expected. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39990/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39989 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39989/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39989/comments | https://api.github.com/repos/huggingface/transformers/issues/39989/events | https://github.com/huggingface/transformers/pull/39989 | 3,300,320,571 | PR_kwDOCUB6oc6ilFIQ | 39,989 | Higgs modules_to_not_convert standardization | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T12:31:10 | 2025-08-08T08:23:01 | 2025-08-08T08:22:59 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39989",
"html_url": "https://github.com/huggingface/transformers/pull/39989",
"diff_url": "https://github.com/huggingface/transformers/pull/39989.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39989.patch",
"merged_at": "2025-08-08T08:22:59"
} | # What does this PR do?
Standardizes the way HIGGS quantization uses the `modules_to_not_convert` attribute.
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39989/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39988 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39988/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39988/comments | https://api.github.com/repos/huggingface/transformers/issues/39988/events | https://github.com/huggingface/transformers/pull/39988 | 3,300,266,198 | PR_kwDOCUB6oc6ik5QI | 39,988 | Update Glm4V processor and add tests | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T12:14:49 | 2025-08-12T11:40:55 | 2025-08-12T11:40:55 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39988",
"html_url": "https://github.com/huggingface/transformers/pull/39988",
"diff_url": "https://github.com/huggingface/transformers/pull/39988.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39988.patch",
"merged_at": "2025-08-12T11:40:55"
} | # What does this PR do?
As per the title, the processor has no tests, and currently a user-defined `size` isn't used when processing images/videos | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39988/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39987 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39987/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39987/comments | https://api.github.com/repos/huggingface/transformers/issues/39987/events | https://github.com/huggingface/transformers/pull/39987 | 3,300,227,743 | PR_kwDOCUB6oc6ikwvz | 39,987 | Add a VGGT (Visual Geometry Grounded Transformer) model compatible with huggingface transformers | {
"login": "panzhizhen111",
"id": 176186831,
"node_id": "U_kgDOCoBlzw",
"avatar_url": "https://avatars.githubusercontent.com/u/176186831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/panzhizhen111",
"html_url": "https://github.com/panzhizhen111",
"followers_url": "https://api.github.com/users/panzhizhen111/followers",
"following_url": "https://api.github.com/users/panzhizhen111/following{/other_user}",
"gists_url": "https://api.github.com/users/panzhizhen111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/panzhizhen111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/panzhizhen111/subscriptions",
"organizations_url": "https://api.github.com/users/panzhizhen111/orgs",
"repos_url": "https://api.github.com/users/panzhizhen111/repos",
"events_url": "https://api.github.com/users/panzhizhen111/events{/privacy}",
"received_events_url": "https://api.github.com/users/panzhizhen111/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-07T12:03:24 | 2025-08-11T02:32:58 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39987",
"html_url": "https://github.com/huggingface/transformers/pull/39987",
"diff_url": "https://github.com/huggingface/transformers/pull/39987.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39987.patch",
"merged_at": null
} | This PR adds the VGGT (Visual Geometry Grounded Transformer) model to the Hugging Face Transformers library.
It includes:
- `VggtConfig`, `VggtModel`
- Integration into the Auto classes (`AutoConfig`, `AutoModel`).
- Basic unit tests for configuration, save/load, and forward pass.
- [WIP] Model documentation in `docs/source/en/model_doc/vggt.mdx`.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "panzhizhen111",
"id": 176186831,
"node_id": "U_kgDOCoBlzw",
"avatar_url": "https://avatars.githubusercontent.com/u/176186831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/panzhizhen111",
"html_url": "https://github.com/panzhizhen111",
"followers_url": "https://api.github.com/users/panzhizhen111/followers",
"following_url": "https://api.github.com/users/panzhizhen111/following{/other_user}",
"gists_url": "https://api.github.com/users/panzhizhen111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/panzhizhen111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/panzhizhen111/subscriptions",
"organizations_url": "https://api.github.com/users/panzhizhen111/orgs",
"repos_url": "https://api.github.com/users/panzhizhen111/repos",
"events_url": "https://api.github.com/users/panzhizhen111/events{/privacy}",
"received_events_url": "https://api.github.com/users/panzhizhen111/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39987/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39986 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39986/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39986/comments | https://api.github.com/repos/huggingface/transformers/issues/39986/events | https://github.com/huggingface/transformers/pull/39986 | 3,300,006,647 | PR_kwDOCUB6oc6ij_ll | 39,986 | fix: resolve triton version check compatibility on windows | {
"login": "Tsumugii24",
"id": 124921491,
"node_id": "U_kgDOB3Imkw",
"avatar_url": "https://avatars.githubusercontent.com/u/124921491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tsumugii24",
"html_url": "https://github.com/Tsumugii24",
"followers_url": "https://api.github.com/users/Tsumugii24/followers",
"following_url": "https://api.github.com/users/Tsumugii24/following{/other_user}",
"gists_url": "https://api.github.com/users/Tsumugii24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tsumugii24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tsumugii24/subscriptions",
"organizations_url": "https://api.github.com/users/Tsumugii24/orgs",
"repos_url": "https://api.github.com/users/Tsumugii24/repos",
"events_url": "https://api.github.com/users/Tsumugii24/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tsumugii24/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-08-07T10:56:21 | 2025-08-11T06:53:47 | 2025-08-11T06:53:20 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39986",
"html_url": "https://github.com/huggingface/transformers/pull/39986",
"diff_url": "https://github.com/huggingface/transformers/pull/39986.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39986.patch",
"merged_at": "2025-08-11T06:53:20"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR fixes https://github.com/huggingface/transformers/issues/39985
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
## Summary
This PR adds support for detecting `triton-windows` in the version-check logic inside [`transformers/utils/import_utils.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/utils/import_utils.py).
Currently, when `triton-windows` is installed on Windows, the following check fails:
```python
importlib.metadata.version("pytorch-triton")
```
As a result, MXFP4 quantization in models like [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) unnecessarily falls back to `bf16`, causing significant inference delays due to CPU offloading.
This PR makes the check compatible with Triton on both Windows and Linux, and still falls back to checking `pytorch-triton` when `triton` is not found, enabling MXFP4 inference on all platforms.
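The fix can be sketched as a small helper that tries each known distribution name for the Triton wheel. Note this is an illustrative sketch under the assumptions described above (the function names `find_dist_version` and `detect_triton_version` are hypothetical, not the exact code merged in this PR):

```python
import importlib.metadata
import importlib.util

def find_dist_version(candidates):
    """Return the version of the first installed distribution, or None."""
    for name in candidates:
        try:
            return importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            continue
    return None

def detect_triton_version():
    # The importable module is always "triton"; only the wheel (distribution)
    # name differs: "triton" on Linux, "triton-windows" on Windows,
    # "pytorch-triton" for wheels bundled with some PyTorch builds.
    if importlib.util.find_spec("triton") is None:
        return None
    return find_dist_version(["triton", "triton-windows", "pytorch-triton"])
```

With this lookup order, a Windows install of `triton-windows` reports its real version instead of raising `PackageNotFoundError`.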
## Test
### test code
A timing measurement is included to show the difference before and after the fix: the inference speed of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) improved roughly 16x on an NVIDIA GeForce RTX 5090 GPU.
```python
from transformers import pipeline
import torch
import time
model_id = "openai/gpt-oss-20b"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto"
)
messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
time_start = time.time()
outputs = pipe(
    messages,
    max_new_tokens=1024,
)
time_end = time.time()
print(outputs[0]["generated_text"][-1])
print(f"Time Cost: {time_end - time_start} seconds")
```
### Before
On Windows with `triton-windows` >= 3.4.0 installed, the MXFP4 fallback warning appears and inference is very slow due to CPU offloading:
```
MXFP4 quantization requires triton >= 3.4.0 and triton_kernels installed, we will default to dequantizing the model to bf16
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████| 3/3 [00:12<00:00, 4.19s/it]
Some parameters are on the meta device because they were offloaded to the cpu.
Device set to use cuda:0
{'role': 'assistant', 'content': 'analysisThe user asks: "Explain quantum mechanics clearly and concisely." They likely want a concise but clear explanation. They want a balanced explanation: basics, key concepts, maybe mention wave-particle duality, superposition, uncertainty, measurement, wavefunction, etc. They want clear and concise. Probably a few paragraphs. They might appreciate analogies. So respond with a concise explanation, maybe bullet points, short sentences. Should avoid jargon. Provide an accessible overview. Probably mention that quantum mechanics is the physics of microscopic scales, describes particles as waves, probability, superposition, entanglement, etc. Also mention that classical mechanics emerges as approximation at larger scales. Keep it clear. Also mention that mathematics: Schrödinger equation, operators. Maybe mention the significance: quantum computing, electronics. So produce a concise explanation. Use simple language. I\'ll produce 3-4 paragraphs or bullet points. Ensure clarity.assistantfinal**Quantum mechanics in a nutshell**\n\n| Core idea | What it means for “tiny” objects |\n|-----------|----------------------------------|\n| **Wave‑particle duality** | Electrons, photons, atoms behave sometimes like tiny balls, sometimes like waves that interfere. |\n| **Wavefunction (ψ)** | A mathematical “wave” that tells you the probability of finding the particle where you look. |\n| **Superposition** | Until you measure it, a quantum system can be in many states at once (e.g., a cat that’s both alive and dead in Schrödinger’s thought experiment). |\n| **Uncertainty principle** | You can’t know position *and* momentum exactly at the same time; the more precisely you know one, the fuzzier the other becomes. |\n| **Measurement collapses the wavefunction** | When you look, the spread of possibilities “collapses” to a single result. 
|\n| **Entanglement** | Two particles can share a single wavefunction, so measuring one instantly tells you about the other, no matter how far apart. |\n\n**Why it matters**\n\n- **Microscopic physics**: Explains how atoms bond, how semiconductors work, why the Sun shines. \n- **Technology**: Quantum bits (qubits) in quantum computers, quantum cryptography, MRI imaging. \n- **Emergence of classical physics**: For large objects the many tiny quantum effects average out, giving us the smooth, predictable world of everyday mechanics.\n\n**Basic picture**\n\n1. **Prepare** a quantum system (e.g., put an electron in a known energy state). \n2. **Let it evolve** according to the Schrödinger equation – the wavefunction spreads, interferes, and changes in time. \n3. **Measure** – you get a definite outcome, and the wavefunction “collapses” to that result. \n4. **Repeat** – the statistical pattern of many measurements matches the probabilities predicted by the wavefunction.\n\nThat is quantum mechanics: a probabilistic, wave‑based theory that replaces the deterministic, Newtonian view for the very small.'}
Time Cost: 471.1367383003235 seconds
```
### After
On Windows with `triton-windows` >= 3.4.0 installed:
- [x] Version is correctly detected with no warnings
- [x] MXFP4 quantization is enabled
- [x] Inference runs efficiently on GPU
```
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████| 3/3 [00:04<00:00, 1.56s/it]
Device set to use cuda:0
{'role': 'assistant', 'content': 'analysisThe user asks: "Explain quantum mechanics clearly and concisely." They want a concise explanation. We should explain key concepts: wave-particle duality, superposition, measurement, entanglement, Heisenberg uncertainty, Schrödinger equation, probabilities, collapse, quantum states, operators. Should be clear, accessible. Probably 200-300 words. Should avoid jargon. Provide intuitive analogies. Use bullet points maybe. Let\'s produce a concise explanation.assistantfinal**Quantum mechanics in a nutshell**\n\n| What it studies | How it works | Why it matters |\n|-----------------|--------------|----------------|\n| **The microscopic world** – atoms, electrons, photons, nuclei | **Probabilities, not certainties** – a particle is described by a *wavefunction* that gives the chance of finding it anywhere. | Explains all of chemistry, electronics, lasers, MRI, and the behavior of the most fundamental building blocks of the universe. |\n\n### Core ideas\n\n1. **Wave–particle duality** \n *Light and matter act both like particles (they have mass or energy) and like waves (they interfere and diffract).* \n *Analogy:* A drop of ink in water spreads out like a wave but still travels as a lump.\n\n2. **Quantum state and superposition** \n A system can be in a *superposition* of many possible states at once. \n *Example:* An electron in an atom can be in a mix of “up” and “down” spin until measured.\n\n3. **Measurement collapses the wavefunction** \n When we observe a property, the superposition instantaneously “collapses” to a single outcome. \n *Result:* We get a definite number (e.g., the electron is found in one particular location), but the exact outcome was only probabilistic beforehand.\n\n4. **Entanglement** \n Two or more particles can become linked so that the state of one instantly determines the state of the other, no matter how far apart. \n *Implication:* Correlations that defy classical explanations.\n\n5. 
**Uncertainty principle** \n Certain pairs of properties (position & momentum, energy & time) cannot both be known exactly at the same time. \n *Consequence:* The world is fundamentally fuzzy at small scales.\n\n6. **Schrödinger equation** \n Governs how a wavefunction evolves over time: \n \\[\n i\\hbar\\,\\frac{\\partial\\psi}{\\partial t} = \\hat H \\psi\n \\] \n *What it does:* Predicts probabilities for all possible outcomes.\n\n### Why the weirdness works\n\n- **Probability amplitudes** (complex numbers) interfere, producing the characteristic patterns of double‑slit experiments. \n- **Operators** represent measurable quantities; their eigenvalues are the possible results. \n- **Observables** are linked to physical experiments, not to hidden “true” values.\n\nIn practice, quantum mechanics tells us how to design semiconductors, explain chemical bonds, predict the spectrum of atoms, and even build quantum computers that exploit superposition and entanglement for computation far beyond classical machines. The theory is mathematically precise, experimentally verified to extraordinary accuracy, and the only framework we have for describing the behavior of matter and energy at the smallest scales.'}
Time Cost: 29.460687398910522 seconds
```
| {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39986/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39985 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39985/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39985/comments | https://api.github.com/repos/huggingface/transformers/issues/39985/events | https://github.com/huggingface/transformers/issues/39985 | 3,299,994,871 | I_kwDOCUB6oc7Esez3 | 39,985 | Triton version check compatibility on windows | {
"login": "Tsumugii24",
"id": 124921491,
"node_id": "U_kgDOB3Imkw",
"avatar_url": "https://avatars.githubusercontent.com/u/124921491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tsumugii24",
"html_url": "https://github.com/Tsumugii24",
"followers_url": "https://api.github.com/users/Tsumugii24/followers",
"following_url": "https://api.github.com/users/Tsumugii24/following{/other_user}",
"gists_url": "https://api.github.com/users/Tsumugii24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tsumugii24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tsumugii24/subscriptions",
"organizations_url": "https://api.github.com/users/Tsumugii24/orgs",
"repos_url": "https://api.github.com/users/Tsumugii24/repos",
"events_url": "https://api.github.com/users/Tsumugii24/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tsumugii24/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-07T10:52:38 | 2025-08-11T06:53:21 | 2025-08-11T06:53:21 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.55.0
- Platform: Windows-11-10.0.26100-SP0
- Python version: 3.12.11
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.6.1
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu129 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA GeForce RTX 5090
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
## Introduction
I discovered this bug while trying to deploy the MXFP4 quantization of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on Windows with an NVIDIA GeForce RTX 5090 GPU. More system information is listed above.
## Description
When running the official code snippet provided in [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b), which uses the `transformers` library, the following issue occurs on Windows:
```
MXFP4 quantization requires triton >= 3.4.0 and triton_kernels installed, we will default to dequantizing the model to bf16
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████| 3/3 [00:10<00:00, 3.55s/it]
Some parameters are on the meta device because they were offloaded to the cpu.
Device set to use cuda:0
```
A response is eventually generated, but it takes a long time because of CPU offloading.
## Reproduction
### code
The code is the official snippet provided by OpenAI on [Hugging Face](https://huggingface.co/openai/gpt-oss-20b):
```
# inference.py
from transformers import pipeline
import torch
model_id = "openai/gpt-oss-20b"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)
messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
### environment
```
Python 3.12.11
```
```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 576.80 Driver Version: 576.80 CUDA Version: 12.9 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 5090 WDDM | 00000000:01:00.0 Off | N/A |
| 0% 39C P8 16W / 600W | 0MiB / 32607MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
```
```
# Name Version Build Channel
accelerate 1.9.0 pypi_0 pypi
addict 2.4.0 pypi_0 pypi
aiohappyeyeballs 2.6.1 pypi_0 pypi
aiohttp 3.12.15 pypi_0 pypi
aiosignal 1.4.0 pypi_0 pypi
attrs 25.3.0 pypi_0 pypi
bzip2 1.0.8 h2bbff1b_6
ca-certificates 2025.2.25 haa95532_0
certifi 2025.8.3 pypi_0 pypi
charset-normalizer 3.4.2 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
datasets 4.0.0 pypi_0 pypi
dill 0.3.8 pypi_0 pypi
expat 2.7.1 h8ddb27b_0
filelock 3.13.1 pypi_0 pypi
frozenlist 1.7.0 pypi_0 pypi
fsspec 2024.6.1 pypi_0 pypi
huggingface-hub 0.34.3 pypi_0 pypi
idna 3.10 pypi_0 pypi
iniconfig 2.1.0 pypi_0 pypi
jinja2 3.1.4 pypi_0 pypi
kernels 0.9.0 pypi_0 pypi
libffi 3.4.4 hd77b12b_1
markupsafe 2.1.5 pypi_0 pypi
modelscope 1.28.2 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
multidict 6.6.3 pypi_0 pypi
multiprocess 0.70.16 pypi_0 pypi
networkx 3.3 pypi_0 pypi
numpy 2.3.2 pypi_0 pypi
openssl 3.0.17 h35632f6_0
packaging 25.0 pypi_0 pypi
pandas 2.3.1 pypi_0 pypi
pillow 11.3.0 pypi_0 pypi
pip 25.1 pyhc872135_2
pluggy 1.6.0 pypi_0 pypi
propcache 0.3.2 pypi_0 pypi
psutil 7.0.0 pypi_0 pypi
pyarrow 21.0.0 pypi_0 pypi
pygments 2.19.2 pypi_0 pypi
pytest 8.4.1 pypi_0 pypi
python 3.12.11 h716150d_0
python-dateutil 2.9.0.post0 pypi_0 pypi
pytz 2025.2 pypi_0 pypi
pyyaml 6.0.2 pypi_0 pypi
regex 2025.7.34 pypi_0 pypi
requests 2.32.4 pypi_0 pypi
safetensors 0.6.1 pypi_0 pypi
setuptools 78.1.1 py312haa95532_0
six 1.17.0 pypi_0 pypi
sqlite 3.50.2 hda9a48d_1
sympy 1.13.3 pypi_0 pypi
tk 8.6.14 h5e9d12e_1
tokenizers 0.21.4 pypi_0 pypi
torch 2.8.0+cu129 pypi_0 pypi
tqdm 4.67.1 pypi_0 pypi
transformers 4.55.0 pypi_0 pypi
triton-kernels 1.0.0 pypi_0 pypi
triton-windows 3.4.0.post20 pypi_0 pypi
typing-extensions 4.12.2 pypi_0 pypi
tzdata 2025.2 pypi_0 pypi
ucrt 10.0.22621.0 haa95532_0
urllib3 2.5.0 pypi_0 pypi
vc 14.3 h2df5915_10
vc14_runtime 14.44.35208 h4927774_10
vs2015_runtime 14.44.35208 ha6b5a95_10
wheel 0.45.1 py312haa95532_0
xxhash 3.5.0 pypi_0 pypi
xz 5.6.4 h4754444_1
yarl 1.20.1 pypi_0 pypi
zlib 1.2.13 h8cc25b3_1
```
### run
```
python inference.py
```
## Pinpoint
Since the official [triton](https://github.com/triton-lang/triton) repository does not support installation on Windows, Windows users rely on a fork, [triton-windows](https://github.com/woct0rdho/triton-windows), compiled and maintained by @woct0rdho. `triton-windows` is installed with the following command, as described in [triton-windows-releases](https://github.com/woct0rdho/triton-windows/releases):
```
pip install -U "triton-windows<3.5"
```
The only difference with `triton-windows` is its distribution name: `pip show triton` finds nothing (use `pip show triton-windows` instead). Usage and imports are otherwise identical to `triton`. A test script for `triton-windows` and its output are shown below:
```
# test.py
import torch
import triton
import triton.language as tl
import importlib.metadata
# triton language test
@triton.jit
def add_kernel(x_ptr, y_ptr, output_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    block_start = pid * BLOCK_SIZE
    offsets = block_start + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    output = x + y
    tl.store(output_ptr + offsets, output, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor):
    output = torch.empty_like(x)
    n_elements = output.numel()
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, output, n_elements, BLOCK_SIZE=1024)
    return output
a = torch.rand(3, device="cuda")
b = a + a
b_compiled = add(a, a)
print(b_compiled - b)
print("If you see tensor([0., 0., 0.], device='cuda:0'), then it works")
# triton version
package = importlib.import_module("triton")
package_version = getattr(package, "__version__", "N/A")
print(package_version)
# linux
# print(importlib.metadata.version("triton"))
# windows
print(importlib.metadata.version("triton-windows"))
```
```
python test.py
tensor([0., 0., 0.], device='cuda:0')
If you see tensor([0., 0., 0.], device='cuda:0'), then it works
3.4.0
3.4.0.post20
```
The issue lies in this block of transformers source code:
https://github.com/huggingface/transformers/blob/555cbf59178134d1b713a7022129aaddfe6e70cf/src/transformers/utils/import_utils.py#L45-L91
The triton version check is not compatible with Windows because of:
https://github.com/huggingface/transformers/blob/555cbf59178134d1b713a7022129aaddfe6e70cf/src/transformers/utils/import_utils.py#L79-L83
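A Windows-aware check could look roughly like this (a sketch with a hypothetical helper name, not the actual fix): it falls back to the `triton-windows` distribution name when `triton` metadata is missing.

```python
import importlib.metadata
import importlib.util


def get_triton_version():
    """Return the installed triton version string, or None if unavailable.

    Checks both distribution names: "triton" (Linux) and "triton-windows"
    (the Windows fork), since `import triton` works for both but only one
    distribution name is registered with pip.
    """
    if importlib.util.find_spec("triton") is None:
        return None
    for dist_name in ("triton", "triton-windows"):
        try:
            return importlib.metadata.version(dist_name)
        except importlib.metadata.PackageNotFoundError:
            continue
    return None
```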
## Conclusion
Transformers fails to detect triton on Windows and automatically dequantizes to bf16, leading to CPU offloading and very slow inference, despite `triton-windows >= 3.4.0` being properly installed.
The bug is caused by the triton version check being incompatible with Windows; with appropriate checking logic, `triton-windows` can be detected and models like [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) can be loaded with MXFP4 quantization smoothly.
### Expected behavior
MXFP4 quantization should work as intended on Windows if `triton-windows` is installed and functioning, instead of defaulting to bf16. This PR https://github.com/huggingface/transformers/pull/39986 may resolve this bug. | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39985/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39985/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39984 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39984/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39984/comments | https://api.github.com/repos/huggingface/transformers/issues/39984/events | https://github.com/huggingface/transformers/pull/39984 | 3,299,877,630 | PR_kwDOCUB6oc6ijjbL | 39,984 | Fix setting attention for multimodal models | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T10:13:10 | 2025-08-19T09:35:12 | 2025-08-19T09:35:12 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39984",
"html_url": "https://github.com/huggingface/transformers/pull/39984",
"diff_url": "https://github.com/huggingface/transformers/pull/39984.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39984.patch",
"merged_at": "2025-08-19T09:35:12"
} | # What does this PR do?
Fixes setting the attention implementation as a dict for multimodal models. Currently it fails because `self._attn_implementation` is not defined at the point where we try to `get` it. We need to set the attention to `None` if the key is not found in the dict, which corresponds to the default attention.
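A simplified sketch of the intended fallback behavior (hypothetical helper name; the real logic lives in the model's attention-setting code):

```python
def resolve_attn_implementation(attn_implementation, sub_config_key):
    """Pick the attention implementation for one sub-model.

    `attn_implementation` can be a single string applied everywhere, or a
    dict keyed by sub-config name (e.g. "text_config", "vision_config").
    A key missing from the dict falls back to None, i.e. the default
    attention, instead of reading an attribute that is not yet defined.
    """
    if isinstance(attn_implementation, dict):
        return attn_implementation.get(sub_config_key, None)
    return attn_implementation


print(resolve_attn_implementation({"text_config": "flash_attention_2"}, "text_config"))   # flash_attention_2
print(resolve_attn_implementation({"text_config": "flash_attention_2"}, "vision_config")) # None
```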
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39984/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39983 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39983/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39983/comments | https://api.github.com/repos/huggingface/transformers/issues/39983/events | https://github.com/huggingface/transformers/issues/39983 | 3,299,842,712 | I_kwDOCUB6oc7Er5qY | 39,983 | CVE fix for v4.37.2 and v4.38.0 | {
"login": "Aman-Surkar",
"id": 99606590,
"node_id": "U_kgDOBe_gPg",
"avatar_url": "https://avatars.githubusercontent.com/u/99606590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aman-Surkar",
"html_url": "https://github.com/Aman-Surkar",
"followers_url": "https://api.github.com/users/Aman-Surkar/followers",
"following_url": "https://api.github.com/users/Aman-Surkar/following{/other_user}",
"gists_url": "https://api.github.com/users/Aman-Surkar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aman-Surkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aman-Surkar/subscriptions",
"organizations_url": "https://api.github.com/users/Aman-Surkar/orgs",
"repos_url": "https://api.github.com/users/Aman-Surkar/repos",
"events_url": "https://api.github.com/users/Aman-Surkar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aman-Surkar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T10:02:44 | 2025-09-15T08:02:57 | 2025-09-15T08:02:57 | NONE | null | null | null | null | Hi Team,
I would like to have the CVE (https://github.com/advisories/GHSA-jjph-296x-mrcr) fixed in v4.37.2 and v4.38.0 through backporting. I see that chat.py, which was recently added and is mentioned in the fix for the CVE, doesn't exist in v4.37.2 and v4.38.0. I would like to know how I can fix this.
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39983/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39982 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39982/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39982/comments | https://api.github.com/repos/huggingface/transformers/issues/39982/events | https://github.com/huggingface/transformers/issues/39982 | 3,299,827,227 | I_kwDOCUB6oc7Er14b | 39,982 | flash-attn cannot perform deterministic computation | {
"login": "Ju-si-yuan",
"id": 59277332,
"node_id": "MDQ6VXNlcjU5Mjc3MzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/59277332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ju-si-yuan",
"html_url": "https://github.com/Ju-si-yuan",
"followers_url": "https://api.github.com/users/Ju-si-yuan/followers",
"following_url": "https://api.github.com/users/Ju-si-yuan/following{/other_user}",
"gists_url": "https://api.github.com/users/Ju-si-yuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ju-si-yuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ju-si-yuan/subscriptions",
"organizations_url": "https://api.github.com/users/Ju-si-yuan/orgs",
"repos_url": "https://api.github.com/users/Ju-si-yuan/repos",
"events_url": "https://api.github.com/users/Ju-si-yuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ju-si-yuan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-07T09:58:06 | 2025-08-08T10:38:50 | 2025-08-08T10:38:49 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.52.4
- Platform: Linux-5.10.134-010.ali5000.al8.x86_64-x86_64-with-glibc2.39
- Python version: 3.10.18
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.6.1
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: 0.16.4
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H800
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Using any training framework to perform training through Transformers.
Here, I am using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/tree/v0.9.3 )
launch:
```bash
llamafactory-cli train examples/train_full/llama3_full_sft.yaml
```
Add the following content to llama3_full_sft.yaml:
```yaml
full_determinism: True
flash_attn: fa2
```
### Expected behavior
After enabling deterministic computation, running twice produces inconsistent loss and gradients.
### Reason
Starting from version v2.4.1, flash-attn supports deterministic computation: https://github.com/Dao-AILab/flash-attention/pull/722.
By default, the backward pass is non-deterministic because it uses `atomicAdd`. To enable deterministic behavior, you must set `deterministic=True` when calling `flash_attn_qkvpacked_func`. Transformers has also added corresponding support: https://github.com/huggingface/transformers/pull/31961.
The value of deterministic is controlled by `deterministic_g`, as shown in the following code:
https://github.com/huggingface/transformers/blob/v4.51.2/src/transformers/modeling_flash_attention_utils.py#L276.
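Note that a module-level flag captures the environment once, at import time. In isolation (a minimal, hypothetical illustration using the same environment variable, not the transformers code itself):

```python
import os

# Simulated "module import": the flag is evaluated once, up front, just like
# a module-level `deterministic_g = os.environ.get(...)` assignment.
os.environ.pop("FLASH_ATTENTION_DETERMINISTIC", None)
deterministic_g = os.environ.get("FLASH_ATTENTION_DETERMINISTIC", "0") == "1"

# Later, enabling full determinism sets the env var -- but the already
# initialized global never sees the change.
os.environ["FLASH_ATTENTION_DETERMINISTIC"] = "1"

def deterministic_now():
    # A lazy read picks up the updated environment on every call.
    return os.environ.get("FLASH_ATTENTION_DETERMINISTIC", "0") == "1"

print(deterministic_g)      # False: stale value captured at "import" time
print(deterministic_now())  # True: read lazily, after the env var was set
```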
However, I believe there is an issue with this approach: `deterministic_g` is a global (module-level) variable that gets initialized as soon as the module is imported. This means that by the time `enable_full_determinism` is executed, `deterministic_g` has already been initialized to False. Therefore, the line at https://github.com/huggingface/transformers/blob/v4.51.2/src/transformers/trainer_utils.py#L79 does not affect the value of `deterministic_g`, resulting in it remaining permanently False. | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39982/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39981 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39981/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39981/comments | https://api.github.com/repos/huggingface/transformers/issues/39981/events | https://github.com/huggingface/transformers/pull/39981 | 3,299,613,845 | PR_kwDOCUB6oc6iiqKC | 39,981 | [Idefics] fix device mismatch | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-08-07T08:50:23 | 2025-08-07T15:45:04 | 2025-08-07T09:12:04 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39981",
"html_url": "https://github.com/huggingface/transformers/pull/39981",
"diff_url": "https://github.com/huggingface/transformers/pull/39981.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39981.patch",
"merged_at": "2025-08-07T09:12:04"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/39947. We need to use a consistent device when computing `position_ids`, and since we are updating `position_ids`, we use its device instead of `pixel_values`'. Otherwise it raises errors because the two tensors are located on different devices | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39981/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39980 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39980/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39980/comments | https://api.github.com/repos/huggingface/transformers/issues/39980/events | https://github.com/huggingface/transformers/pull/39980 | 3,299,571,961 | PR_kwDOCUB6oc6iig5V | 39,980 | [DRAFT] optimize gpt-oss decoding | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T08:38:48 | 2025-08-07T08:43:10 | 2025-08-07T08:43:10 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39980",
"html_url": "https://github.com/huggingface/transformers/pull/39980",
"diff_url": "https://github.com/huggingface/transformers/pull/39980.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39980.patch",
"merged_at": null
Optimizes gpt-oss decoding: the hidden states only need to be repeated as many times as the number of local experts. | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39980/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39979 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39979/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39979/comments | https://api.github.com/repos/huggingface/transformers/issues/39979/events | https://github.com/huggingface/transformers/pull/39979 | 3,299,564,676 | PR_kwDOCUB6oc6iifTg | 39,979 | Fix cross-attention masking before residual connection | {
"login": "ArkVex",
"id": 159469387,
"node_id": "U_kgDOCYFPSw",
"avatar_url": "https://avatars.githubusercontent.com/u/159469387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArkVex",
"html_url": "https://github.com/ArkVex",
"followers_url": "https://api.github.com/users/ArkVex/followers",
"following_url": "https://api.github.com/users/ArkVex/following{/other_user}",
"gists_url": "https://api.github.com/users/ArkVex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArkVex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArkVex/subscriptions",
"organizations_url": "https://api.github.com/users/ArkVex/orgs",
"repos_url": "https://api.github.com/users/ArkVex/repos",
"events_url": "https://api.github.com/users/ArkVex/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArkVex/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T08:36:23 | 2025-08-22T11:44:40 | 2025-08-22T11:44:40 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39979",
"html_url": "https://github.com/huggingface/transformers/pull/39979",
"diff_url": "https://github.com/huggingface/transformers/pull/39979.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39979.patch",
"merged_at": null
This PR fixes an incorrect masking position in `MllamaCrossAttentionDecoderLayer`. Previously, the `full_text_row_masked_out_mask` was applied after the cross-attention output was added to the residual connection. This allowed image tokens to leak into text tokens that should not have seen them.
The fix moves the masking to immediately after the cross-attention output is computed and before it is added to the residual. This ensures correct separation between image and text features and aligns with the expected behavior described in the reference implementation.
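In schematic form (toy tensors, not the actual Mllama layer code), the corrected ordering is:

```python
import torch

def cross_attn_residual(hidden_states, attn_out, gate, row_mask):
    # Mask the cross-attention output *before* the residual add: text rows
    # that must not see the image keep their hidden state unchanged.
    attn_out = row_mask * attn_out
    return hidden_states + gate.tanh() * attn_out

h = torch.ones(1, 2, 4)                 # residual stream
a = torch.full((1, 2, 4), 5.0)          # cross-attention output
gate = torch.tensor(1.0)
mask = torch.tensor([[[1.0], [0.0]]])   # second row masked out
out = cross_attn_residual(h, a, gate, mask)
print(torch.equal(out[0, 1], h[0, 1]))  # True: masked row untouched
```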
Fixes #39379 | {
"login": "ArkVex",
"id": 159469387,
"node_id": "U_kgDOCYFPSw",
"avatar_url": "https://avatars.githubusercontent.com/u/159469387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArkVex",
"html_url": "https://github.com/ArkVex",
"followers_url": "https://api.github.com/users/ArkVex/followers",
"following_url": "https://api.github.com/users/ArkVex/following{/other_user}",
"gists_url": "https://api.github.com/users/ArkVex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArkVex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArkVex/subscriptions",
"organizations_url": "https://api.github.com/users/ArkVex/orgs",
"repos_url": "https://api.github.com/users/ArkVex/repos",
"events_url": "https://api.github.com/users/ArkVex/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArkVex/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39979/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39978 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39978/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39978/comments | https://api.github.com/repos/huggingface/transformers/issues/39978/events | https://github.com/huggingface/transformers/pull/39978 | 3,299,512,793 | PR_kwDOCUB6oc6iiT6U | 39,978 | Various test fixes for AMD | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T08:20:41 | 2025-08-07T12:00:12 | 2025-08-07T08:57:04 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39978",
"html_url": "https://github.com/huggingface/transformers/pull/39978",
"diff_url": "https://github.com/huggingface/transformers/pull/39978.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39978.patch",
"merged_at": "2025-08-07T08:57:04"
} | This PR introduces minor test fixes for 4 different models:
- for `internvl` we add an Expectation for ROCm 9.4 and remove a tensor being created from a tensor;
- for `llama` we add an Expectation for ROCm 9.4;
- for `llava` we add the `@require_bitsandbytes` decorator for a test that requires bitsandbytes;
- for `mistral3` we add an Expectation for ROCm 9.4; | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39978/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39977 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39977/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39977/comments | https://api.github.com/repos/huggingface/transformers/issues/39977/events | https://github.com/huggingface/transformers/issues/39977 | 3,299,372,993 | I_kwDOCUB6oc7EqG_B | 39,977 | FSDP2 not compatible with transformers >= 4.54.0 GenericForTokenClassification | {
"login": "ETOgaosion",
"id": 57280232,
"node_id": "MDQ6VXNlcjU3MjgwMjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/57280232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ETOgaosion",
"html_url": "https://github.com/ETOgaosion",
"followers_url": "https://api.github.com/users/ETOgaosion/followers",
"following_url": "https://api.github.com/users/ETOgaosion/following{/other_user}",
"gists_url": "https://api.github.com/users/ETOgaosion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ETOgaosion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ETOgaosion/subscriptions",
"organizations_url": "https://api.github.com/users/ETOgaosion/orgs",
"repos_url": "https://api.github.com/users/ETOgaosion/repos",
"events_url": "https://api.github.com/users/ETOgaosion/events{/privacy}",
"received_events_url": "https://api.github.com/users/ETOgaosion/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-07T07:34:42 | 2025-08-15T10:28:17 | 2025-08-15T10:28:17 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.54.0
- Platform: Linux-5.10.135.bsk.6-amd64-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA L20
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Python script `test_fsdp2.py`:
```py
import torch
from torch.distributed.fsdp import fully_shard
from transformers import AutoModelForTokenClassification

if not torch.distributed.is_initialized():
    torch.distributed.init_process_group(
        backend="nccl" if torch.cuda.is_available() else "gloo", init_method="env://"
    )
rank = torch.distributed.get_rank()
if torch.cuda.is_available():
    torch.cuda.set_device(rank)


def load_valuehead_model(path):
    return AutoModelForTokenClassification.from_pretrained(path)


def apply_fsdp2(model, fsdp_kwargs):
    """model: AutoModelForCausalLM"""
    default_transformer_cls_names_to_wrap = getattr(model, "_no_split_modules", None) or []
    # Wrap every transformer block listed in the model's _no_split_modules
    modules = [
        module
        for module in model.modules()
        if module.__class__.__name__ in default_transformer_cls_names_to_wrap
    ]
    for module in modules:
        fully_shard(module, **fsdp_kwargs)
    fully_shard(model, **fsdp_kwargs)  # FSDP2 will not reshard_after_forward for the root module


model = load_valuehead_model("Qwen/Qwen2-0.5B")
module = apply_fsdp2(model, {})

if torch.distributed.is_initialized():
    torch.distributed.destroy_process_group()
```
Execute:
```sh
torchrun --standalone --nnodes=1 --nproc-per-node=2 test_fsdp2.py
```
Bug:
```
[rank1]: Traceback (most recent call last):
[rank1]:   File "/workspace/verl/test_fsdp2.py", line 37, in <module>
[rank1]:     module = apply_fsdp2(model, {})
[rank1]:   File "/workspace/verl/test_fsdp2.py", line 34, in apply_fsdp2
[rank1]:     fully_shard(model, **fsdp_kwargs)  # fsdp2 will not reshard_after_forward for root module
[rank1]:   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/_composable/contract.py", line 150, in wrapper
[rank1]:     updated = func(inp_module, *args, **kwargs)
[rank1]:   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_fully_shard/_fully_shard.py", line 241, in fully_shard
[rank1]:     module.__class__ = new_cls
[rank1]:   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2041, in __setattr__
[rank1]:     super().__setattr__(name, value)
[rank1]: TypeError: __class__ assignment: 'FSDPQwen2ForTokenClassification' object layout differs from 'Qwen2ForTokenClassification'
[rank0]: Traceback (most recent call last):
[rank0]:   File "/workspace/verl/test_fsdp2.py", line 37, in <module>
[rank0]:     module = apply_fsdp2(model, {})
[rank0]:   File "/workspace/verl/test_fsdp2.py", line 34, in apply_fsdp2
[rank0]:     fully_shard(model, **fsdp_kwargs)  # fsdp2 will not reshard_after_forward for root module
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/_composable/contract.py", line 150, in wrapper
[rank0]:     updated = func(inp_module, *args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/fsdp/_fully_shard/_fully_shard.py", line 241, in fully_shard
[rank0]:     module.__class__ = new_cls
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2041, in __setattr__
[rank0]:     super().__setattr__(name, value)
[rank0]: TypeError: __class__ assignment: 'FSDPQwen2ForTokenClassification' object layout differs from 'Qwen2ForTokenClassification'
```
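The last frame is the key: CPython only permits `__class__` reassignment between classes whose instances share the same memory layout. The mechanism can be reproduced without torch or transformers; the class names `Plain`/`Slotted` below are illustrative stand-ins, not the actual FSDP2 classes:

```python
# Why "object layout differs": Python allows obj.__class__ = NewCls only when
# both classes have compatible instance layouts. A class declaring __slots__
# stores its attributes differently from a plain __dict__-based class, so the
# interpreter rejects the swap with the same TypeError seen in the traceback.

class Plain:
    pass

class Slotted:
    __slots__ = ("extra",)  # different instance layout than Plain

obj = Plain()
try:
    obj.__class__ = Slotted
except TypeError as e:
    print(f"TypeError: {e}")  # "... 'Plain' object layout differs from 'Slotted'"
```

This suggests the 4.54.0 `GenericForTokenClassification` refactor changed the instance layout of `Qwen2ForTokenClassification` in some way that FSDP2's dynamically generated `FSDPQwen2ForTokenClassification` wrapper no longer matches.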
### Expected behavior
Use transformers 4.53.2:
```
Successfully installed transformers-4.53.2
root@25ab4d5939b5:/workspace/verl# torchrun --standalone --nnodes=1 --nproc-per-node=2 test_fsdp2.py
W0807 07:34:18.264000 3600 torch/distributed/run.py:766]
W0807 07:34:18.264000 3600 torch/distributed/run.py:766] *****************************************
W0807 07:34:18.264000 3600 torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0807 07:34:18.264000 3600 torch/distributed/run.py:766] *****************************************
Some weights of Qwen2ForTokenClassification were not initialized from the model checkpoint at Qwen/Qwen2.5-0.5B and are newly initialized: ['score.bias', 'score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of Qwen2ForTokenClassification were not initialized from the model checkpoint at Qwen/Qwen2.5-0.5B and are newly initialized: ['score.bias', 'score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
``` | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39977/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39976 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39976/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39976/comments | https://api.github.com/repos/huggingface/transformers/issues/39976/events | https://github.com/huggingface/transformers/pull/39976 | 3,299,366,244 | PR_kwDOCUB6oc6ih1Bn | 39,976 | Fix Qwen3 MoE GGUF architecture mismatch | {
"login": "ctcanbol",
"id": 103742287,
"node_id": "U_kgDOBi77Tw",
"avatar_url": "https://avatars.githubusercontent.com/u/103742287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ctcanbol",
"html_url": "https://github.com/ctcanbol",
"followers_url": "https://api.github.com/users/ctcanbol/followers",
"following_url": "https://api.github.com/users/ctcanbol/following{/other_user}",
"gists_url": "https://api.github.com/users/ctcanbol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ctcanbol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ctcanbol/subscriptions",
"organizations_url": "https://api.github.com/users/ctcanbol/orgs",
"repos_url": "https://api.github.com/users/ctcanbol/repos",
"events_url": "https://api.github.com/users/ctcanbol/events{/privacy}",
"received_events_url": "https://api.github.com/users/ctcanbol/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T07:32:56 | 2025-08-12T13:39:20 | 2025-08-12T13:38:48 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39976",
"html_url": "https://github.com/huggingface/transformers/pull/39976",
"diff_url": "https://github.com/huggingface/transformers/pull/39976.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39976.patch",
"merged_at": "2025-08-12T13:38:48"
} | # What does this PR do?
Currently, GGUF versions of Qwen3 MoE models raise "_ValueError: The checkpoint you are trying to load has model type qwen3moe but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date_". This PR resolves this issue.
Fixes https://github.com/huggingface/transformers/pull/39638
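To illustrate the mismatch: GGUF metadata tags these checkpoints with the architecture string `qwen3moe`, while transformers registers the config under the model type `qwen3_moe`. The sketch below is not the actual code in the repository, only a minimal model of the kind of name-bridging involved:

```python
# Hypothetical sketch of the architecture-name translation. The dict and
# function names here are illustrative; the real table lives in transformers'
# GGUF integration code.
GGUF_TO_TRANSFORMERS_ARCH = {
    "qwen2": "qwen2",
    "qwen3": "qwen3",
    "qwen3moe": "qwen3_moe",  # the kind of entry that was missing
}

def resolve_model_type(gguf_arch: str) -> str:
    try:
        return GGUF_TO_TRANSFORMERS_ARCH[gguf_arch]
    except KeyError:
        raise ValueError(
            f"The checkpoint you are trying to load has model type {gguf_arch} "
            "but Transformers does not recognize this architecture."
        )

print(resolve_model_type("qwen3moe"))  # qwen3_moe
```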
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @SunMarc @MekkCyber
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39976/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39975 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39975/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39975/comments | https://api.github.com/repos/huggingface/transformers/issues/39975/events | https://github.com/huggingface/transformers/pull/39975 | 3,299,077,941 | PR_kwDOCUB6oc6ig2sY | 39,975 | [bugfix] Fix tensor device in Idefics2, Idefics3, and SmolVLM | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-08-07T05:48:01 | 2025-08-13T07:58:53 | 2025-08-13T07:58:51 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39975",
"html_url": "https://github.com/huggingface/transformers/pull/39975",
"diff_url": "https://github.com/huggingface/transformers/pull/39975.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39975.patch",
"merged_at": "2025-08-13T07:58:51"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39975/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39974 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39974/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39974/comments | https://api.github.com/repos/huggingface/transformers/issues/39974/events | https://github.com/huggingface/transformers/issues/39974 | 3,298,661,255 | I_kwDOCUB6oc7EnZOH | 39,974 | bug in new transformers: 'Florence2ForConditionalGeneration' object has no attribute '_supports_sdpa' | {
"login": "pseudotensor",
"id": 2249614,
"node_id": "MDQ6VXNlcjIyNDk2MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2249614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pseudotensor",
"html_url": "https://github.com/pseudotensor",
"followers_url": "https://api.github.com/users/pseudotensor/followers",
"following_url": "https://api.github.com/users/pseudotensor/following{/other_user}",
"gists_url": "https://api.github.com/users/pseudotensor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pseudotensor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pseudotensor/subscriptions",
"organizations_url": "https://api.github.com/users/pseudotensor/orgs",
"repos_url": "https://api.github.com/users/pseudotensor/repos",
"events_url": "https://api.github.com/users/pseudotensor/events{/privacy}",
"received_events_url": "https://api.github.com/users/pseudotensor/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-07T02:27:18 | 2025-09-07T22:19:49 | 2025-09-01T11:40:40 | NONE | null | null | null | null | ### System Info
transformers 4.55.0
python 3.10
ubuntu 22
### Who can help?
@amyeroberts, @qubvel
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
#!/usr/bin/env python3
"""
Minimal reproduction script for Florence-2 AttributeError: '_supports_sdpa'
"""
import torch
import sys


def main():
    print("Loading Florence-2 model...")
    print(f"PyTorch version: {torch.__version__}")
    print(f"Transformers version: {__import__('transformers').__version__}")
    print(f"Python version: {sys.version}")
    try:
        from transformers import AutoConfig

        # Load config
        config = AutoConfig.from_pretrained(
            "microsoft/Florence-2-base",
            trust_remote_code=True,
        )
        print(f"Config loaded: {type(config)}")

        # Use AutoModelForCausalLM to trigger the error
        from transformers import AutoModelForCausalLM

        # This should trigger the _supports_sdpa error during model initialization
        print("Loading model with AutoModelForCausalLM...")
        model = AutoModelForCausalLM.from_pretrained(
            "microsoft/Florence-2-base",
            config=config,
            trust_remote_code=True,
        )
        print("Success!")
    except AttributeError as e:
        if "_supports_sdpa" in str(e):
            print("SUCCESS: Reproduced the _supports_sdpa error!")
            print(f"AttributeError: {e}")
            import traceback
            traceback.print_exc()
            return True
        else:
            print(f"Different AttributeError: {e}")
            import traceback
            traceback.print_exc()
            return False
    except Exception as e:
        print(f"Other error: {e}")
        import traceback
        traceback.print_exc()
        return False


if __name__ == "__main__":
    success = main()
    if success:
        print("\n✅ Successfully reproduced the _supports_sdpa error!")
    else:
        print("\n❌ Could not reproduce the _supports_sdpa error")
```
gives:
```
Loading Florence-2 model...
PyTorch version: 2.6.0+cu124
Transformers version: 4.55.0
Python version: 3.10.18 | packaged by conda-forge | (main, Jun 4 2025, 14:45:41) [GCC 13.3.0]
Config loaded: <class 'transformers_modules.microsoft.Florence-2-base.5ca5edf5bd017b9919c05d08aebef5e4c7ac3bac.configuration_florence2.Florence2Config'>
Loading model with AutoModelForCausalLM...
SUCCESS: Reproduced the _supports_sdpa error!
AttributeError: 'Florence2ForConditionalGeneration' object has no attribute '_supports_sdpa'
Traceback (most recent call last):
  File "/home/jon/h2ogpt_internal/florence2_minimal_repro.py", line 30, in main
    model = AutoModelForCausalLM.from_pretrained(
  File "/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 593, in from_pretrained
    return model_class.from_pretrained(
  File "/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/transformers/modeling_utils.py", line 316, in _wrapper
    return func(*args, **kwargs)
  File "/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4986, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/jon/.cache/huggingface/modules/transformers_modules/microsoft/Florence-2-base/5ca5edf5bd017b9919c05d08aebef5e4c7ac3bac/modeling_florence2.py", line 2535, in __init__
    super().__init__(config)
  File "/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2227, in __init__
    self.config._attn_implementation_internal = self._check_and_adjust_attn_implementation(
  File "/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2758, in _check_and_adjust_attn_implementation
    return self.get_correct_attn_implementation(applicable_attn_implementation, is_init_check)
  File "/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2763, in get_correct_attn_implementation
    if not self._supports_sdpa:
  File "/home/jon/miniconda3/envs/h2ogpt/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1928, in __getattr__
    raise AttributeError(
AttributeError: 'Florence2ForConditionalGeneration' object has no attribute '_supports_sdpa'
✅ Successfully reproduced the _supports_sdpa error!
```
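The final frame shows why the lookup errors out instead of falling back to a default: `torch.nn.Module.__getattr__` only resolves registered parameters, buffers, and submodules, so any attribute missing from the class itself raises `AttributeError` outright. A torch-free sketch of that behavior, where `MiniModule` is a hypothetical stand-in, along with the defensive `getattr` guard one could use instead:

```python
# MiniModule mimics how torch.nn.Module handles unknown attributes: its
# __getattr__ raises AttributeError for anything not explicitly registered,
# which is exactly what happens when modeling_utils reads self._supports_sdpa
# on a remote-code model class that never defined it.

class MiniModule:
    def __getattr__(self, name):
        raise AttributeError(
            f"'{type(self).__name__}' object has no attribute '{name}'"
        )

m = MiniModule()
try:
    m._supports_sdpa  # direct access fails, as in the traceback
except AttributeError as e:
    print(e)

# Defensive pattern: treat a missing flag as "not supported" instead of crashing
supports_sdpa = getattr(m, "_supports_sdpa", False)
print(supports_sdpa)  # False
```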
### Expected behavior
This shouldn't fail; I'm unsure which transformers version it started in. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39974/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39974/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39973 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39973/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39973/comments | https://api.github.com/repos/huggingface/transformers/issues/39973/events | https://github.com/huggingface/transformers/pull/39973 | 3,298,487,106 | PR_kwDOCUB6oc6ie3Zw | 39,973 | Causal loss for `ForConditionalGeneration` | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-07T00:36:30 | 2025-08-12T12:03:11 | 2025-08-12T12:03:10 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39973",
"html_url": "https://github.com/huggingface/transformers/pull/39973",
"diff_url": "https://github.com/huggingface/transformers/pull/39973.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39973.patch",
"merged_at": "2025-08-12T12:03:10"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39973/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39972 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39972/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39972/comments | https://api.github.com/repos/huggingface/transformers/issues/39972/events | https://github.com/huggingface/transformers/issues/39972 | 3,298,407,074 | I_kwDOCUB6oc7EmbKi | 39,972 | Gemma3 with fp16 in inference (I don't know if this change is working in fine-tune) #BUG FIX | {
"login": "DGTell",
"id": 122606028,
"node_id": "U_kgDOB07RzA",
"avatar_url": "https://avatars.githubusercontent.com/u/122606028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DGTell",
"html_url": "https://github.com/DGTell",
"followers_url": "https://api.github.com/users/DGTell/followers",
"following_url": "https://api.github.com/users/DGTell/following{/other_user}",
"gists_url": "https://api.github.com/users/DGTell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DGTell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DGTell/subscriptions",
"organizations_url": "https://api.github.com/users/DGTell/orgs",
"repos_url": "https://api.github.com/users/DGTell/repos",
"events_url": "https://api.github.com/users/DGTell/events{/privacy}",
"received_events_url": "https://api.github.com/users/DGTell/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T23:46:26 | 2025-09-08T11:13:38 | 2025-09-08T11:13:38 | NONE | null | null | null | null | ### First of all, I want to say that I’m not a programmer, and I don’t know much about GitHub.
(This post was translated with ChatGPT because my English isn’t great)
I don’t know if this issue is just with me or if it affects everyone, but **I managed to fix the problem** where, during inference with Gemma3-4B-it (I think it’s the same for other sizes), NaN values appeared in fp16.
### If Gemma3 works fine in fp16 for everyone reading this, then please just delete this post. Pleeeaseeee…
I just want to share some thoughts. If you find this useless, just scroll to the end :3
- [My personal experience and opinion](#My-personal-experience-and-opinion)
- [Where the overflow happens](#Where-the-overflow-happens)
- [The fix](#The-fix)
- [That’s it](#Thats-it)
# My personal experience and opinion
My hardware isn’t very powerful, so fp16 was my only hope. But with this model, the CUDA kernel crashed almost immediately. I thought, “It’s 100% the vision tower’s fault,” and tried loading those layers in bf16 (simulated on Windows 11) while keeping the rest in fp16, but as I said, I’ve been using Python for less than a month, so I failed.
After another failed attempt to load the model in fp16, I got frustrated and dove into GitHub forums (and here), and Hugging Face, but since I’m not good at searching, I couldn’t find posts about this specific problem. I did see an inspiring post where someone (or some team) fixed a similar issue for a diffusion model.
That’s when I realized: I need to get into the transformers/pytorch code and fix it myself.
I went into the file that threw the CUDA kernel error `.../generation/utils.py`, and for the first time saw that the crash was caused by NaN. Together with ChatGPT, I tried to find the root cause, but after a whole day of debugging, we found nothing.
A few days later, I came back to the problem, and I don’t remember how, but I ended up in `.../models/gemma3/modeling_gemma3.py` (Yeah, I know it’s auto-generated, but for quick fixes, it’s perfect).
So, I figured out: CUDA doesn’t crash right away, but after 5–10 seconds. That means there’s an **overflow** (or blowup) of values. Want to know my debugging process? ChatGPT told me the most likely spots for float16 overflow, and I just spammed `print('Variable name:', variable.max().item())` INSIDE EVERY def forward. My console wanted to cry. But I was happy as a kid, because with every printout ChatGPT pointed out what looked suspicious. He even wrote a function to detect exactly where Inf/NaN first appeared during a forward. At one point, my script had around 130 print statements just for debugging.
And I found exactly where the NaN (and before that, inf) appeared! Finally, here’s something useful for you:
# Where the overflow happens
The reason is in the following lines (I don’t know how to make a diff format in markdown to highlight lines, sorry):
## First cause:
```python
132 class Gemma3RMSNorm(nn.Module):
133 def __init__(self, dim: int, eps: float = 1e-6):
134 super().__init__()
135 self.eps = eps
136 self.weight = nn.Parameter(torch.zeros(dim))
137
138 def _norm(self, x):
139 return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
140
141 def forward(self, x):
142 output = self._norm(x.float())
143 # Llama does x.to(float16) * w whilst Gemma3 is (x * w).to(float16)
144 # See https://github.com/huggingface/transformers/pull/29402
145 output = output * (1.0 + self.weight.float())
146 return output.type_as(x)
```
## Second cause:
```python
373 residual = hidden_states
374
375 hidden_states = self.input_layernorm(hidden_states)
376
377 # apply global RoPE to non-sliding layer only
378 if self.self_attn.is_sliding:
379 position_embeddings = position_embeddings_local
380 else:
381 position_embeddings = position_embeddings_global
382
383 hidden_states, self_attn_weights = self.self_attn(
384 hidden_states=hidden_states,
385 position_embeddings=position_embeddings,
386 attention_mask=attention_mask,
387 position_ids=position_ids,
388 past_key_value=past_key_value,
389 output_attentions=output_attentions,
390 use_cache=use_cache,
391 cache_position=cache_position,
392 **kwargs,
393 )
394 hidden_states = self.post_attention_layernorm(hidden_states)
395 hidden_states = residual + hidden_states
396
397 residual = hidden_states
398 hidden_states = self.pre_feedforward_layernorm(hidden_states)
399 hidden_states = self.mlp(hidden_states)
400 hidden_states = self.post_feedforward_layernorm(hidden_states)
401 hidden_states = residual + hidden_states
```
The first cause may not be obvious, but for some reason, the model weights there keep changing. ChatGPT said this shouldn’t happen during inference (with my inference code, it’s fine). So I added to my code:
```python
for param in model.parameters():
param.requires_grad = False
```
And this seemed to fix that problem: no more overflows in `return output.type_as(x)`, though I’m not 100% sure, because I forgot what the print was showing after my fix. Yeah, since we’re casting back to fp16, it immediately gives inf (something in the Gemma3 code changes the weights incorrectly, as `print(self.weight.max().item())` kept showing different, increasing values on every forward).
The second cause became obvious when I saw this:
```python
373 residual = hidden_states
...
395 hidden_states = residual + hidden_states
396
397 residual = hidden_states
...
401 hidden_states = residual + hidden_states
```
This basically means that with each forward pass, the values keep increasing—and sooner or later, even fp64 will overflow.
# The fix:
## It’s very simple:
```python
142 output = self._norm(x.float())
143 # Llama does x.to(float16) * w whilst Gemma3 is (x * w).to(float16)
144 # See https://github.com/huggingface/transformers/pull/29402
145 output = output * (1.0 + self.weight.float())
146+ fp16_max = 65503.9
147+ output = torch.clamp(output, min=-fp16_max, max=fp16_max)
148 return output.type_as(x)
```
## And…
```python
397+ hidden_states = (residual + hidden_states).float()
398+ fp16_max = 65503.9
399+ hidden_states = torch.clamp(hidden_states, min=-fp16_max, max=fp16_max)
400+ hidden_states = hidden_states.half()
401
402 residual = hidden_states
403 hidden_states = self.pre_feedforward_layernorm(hidden_states)
404 hidden_states = self.mlp(hidden_states)
405 hidden_states = self.post_feedforward_layernorm(hidden_states)
406+ hidden_states = (residual + hidden_states).float()
407+ hidden_states = torch.clamp(hidden_states, min=-fp16_max, max=fp16_max)
408+ hidden_states = hidden_states.half()
```
That’s it—fp16 works for me now! And, for some reason, it works even better than the original fp32/bf16 (sometimes the model used to generate gibberish for no reason). But I want to be clear:
**THIS CODE IS NOT IDEAL.** Yes, it solves the NaN problem, but for some reason inf still appears in hidden_states and then disappears, and when the model generates tokens, everything is fine. But the fact that, in the original code, hidden_states in fp32 reaches ~264,000.0 is NOT NORMAL. Nor is it normal for model weights to change unless you include this in your inference code:
```python
for param in model.parameters():
param.requires_grad = False
```
This happened even though I tried adding `model = Gemma3ForConditionalGeneration.from_pretrained(...).eval()` and `with torch.inference_mode():`. And the example from Gemma3’s Hugging Face card has the same issues, so it’s NOT a problem with my inference code!
## That’s it
I’m not going to submit a PR or edit the repo for two reasons:
1. This is a hack; it shouldn’t be necessary in the first place. (According to ChatGPT, weights should not change during inference. As for hidden_states... I don’t think they should behave this way either…)
2. It could be fixed better (like `if dtype == fp16:` or something—BUT I’M NOT A PROGRAMMER!!!) | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39972/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39971 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39971/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39971/comments | https://api.github.com/repos/huggingface/transformers/issues/39971/events | https://github.com/huggingface/transformers/pull/39971 | 3,298,250,470 | PR_kwDOCUB6oc6ieD86 | 39,971 | Fix missing video inputs for PerceptionLM. | {
"login": "shuminghu",
"id": 2934295,
"node_id": "MDQ6VXNlcjI5MzQyOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2934295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuminghu",
"html_url": "https://github.com/shuminghu",
"followers_url": "https://api.github.com/users/shuminghu/followers",
"following_url": "https://api.github.com/users/shuminghu/following{/other_user}",
"gists_url": "https://api.github.com/users/shuminghu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuminghu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuminghu/subscriptions",
"organizations_url": "https://api.github.com/users/shuminghu/orgs",
"repos_url": "https://api.github.com/users/shuminghu/repos",
"events_url": "https://api.github.com/users/shuminghu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuminghu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-08-06T22:18:42 | 2025-08-07T16:18:47 | 2025-08-07T15:54:46 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39971",
"html_url": "https://github.com/huggingface/transformers/pull/39971",
"diff_url": "https://github.com/huggingface/transformers/pull/39971.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39971.patch",
"merged_at": "2025-08-07T15:54:46"
} |
Critical: Fixes missing video input for PerceptionLM (accidentally removed in [PR](https://github.com/huggingface/transformers/pull/39583))
Minor: Add support for vanilla images that only have C, H, W dims but no tiles dim.
These are non-default image shapes in PLM, but they're useful in demos and on low-resource devices,
e.g., in the just-added "PLM Simple Fine-tuning Example" under
https://huggingface.co/facebook/Perception-LM-1B#plm-usage
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39971/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39970 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39970/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39970/comments | https://api.github.com/repos/huggingface/transformers/issues/39970/events | https://github.com/huggingface/transformers/pull/39970 | 3,298,137,794 | PR_kwDOCUB6oc6idrHh | 39,970 | Add Keypoint Matcher pipeline | {
"login": "sbucaille",
"id": 24275548,
"node_id": "MDQ6VXNlcjI0Mjc1NTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/24275548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbucaille",
"html_url": "https://github.com/sbucaille",
"followers_url": "https://api.github.com/users/sbucaille/followers",
"following_url": "https://api.github.com/users/sbucaille/following{/other_user}",
"gists_url": "https://api.github.com/users/sbucaille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sbucaille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbucaille/subscriptions",
"organizations_url": "https://api.github.com/users/sbucaille/orgs",
"repos_url": "https://api.github.com/users/sbucaille/repos",
"events_url": "https://api.github.com/users/sbucaille/events{/privacy}",
"received_events_url": "https://api.github.com/users/sbucaille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T21:27:58 | 2025-08-26T14:36:28 | 2025-08-26T14:26:57 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39970",
"html_url": "https://github.com/huggingface/transformers/pull/39970",
"diff_url": "https://github.com/huggingface/transformers/pull/39970.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39970.patch",
"merged_at": "2025-08-26T14:26:57"
} | # What does this PR do?
Implements `keypoint-matcher` pipeline.
Quite basic for now, let me know if I should add things.
I added tests covering single and multiple pairs, as well as checking that the pipeline correctly fails when only one image is provided.
Committed on top of #39968 but will be rebased on main once the fix is merged
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@qubvel @stevhliu | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39970/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39969 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39969/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39969/comments | https://api.github.com/repos/huggingface/transformers/issues/39969/events | https://github.com/huggingface/transformers/issues/39969 | 3,298,128,111 | I_kwDOCUB6oc7ElXDv | 39,969 | Finetune `gpt-oss-20b` with `mxfp4` quantization | {
"login": "eliotjones1",
"id": 12123338,
"node_id": "MDQ6VXNlcjEyMTIzMzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/12123338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliotjones1",
"html_url": "https://github.com/eliotjones1",
"followers_url": "https://api.github.com/users/eliotjones1/followers",
"following_url": "https://api.github.com/users/eliotjones1/following{/other_user}",
"gists_url": "https://api.github.com/users/eliotjones1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliotjones1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliotjones1/subscriptions",
"organizations_url": "https://api.github.com/users/eliotjones1/orgs",
"repos_url": "https://api.github.com/users/eliotjones1/repos",
"events_url": "https://api.github.com/users/eliotjones1/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliotjones1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T21:24:19 | 2025-08-14T12:41:15 | 2025-08-06T22:47:26 | NONE | null | null | null | null | Apologies if this is the wrong issue format -- I am not confident enough to say that this is for sure a bug and not just user error. I am currently unable to finetune (using peft/trl) the new oss openai model with quantization.
The relevant packages and their versions are:
```
transformers 4.56.0.dev0 /{installed from gh repo}/training/transformers
triton 3.4.0
triton-kernels 1.0.0
```
With these, I am able to train in bf16, with the warning: `MXFP4 quantization requires triton >= 3.4.0 and kernels installed, we will default to dequantizing the model to bf16`
So, what do I do? I run: `uv pip install kernels`
Then, when training, I get the following error:
```
ValueError: The model you are trying to fine-tune is quantized with QuantizationMethod.MXFP4 but that quantization method do not support training. Please open an issue on GitHub: https://github.com/huggingface/transformers to request the support for training support for QuantizationMethod.MXFP4
```
I'm operating on a node of H100s for reference, and more or less following the recipe [here](https://cookbook.openai.com/articles/gpt-oss/fine-tune-transfomers) on a custom dataset.
Has anyone else had success here? | {
"login": "eliotjones1",
"id": 12123338,
"node_id": "MDQ6VXNlcjEyMTIzMzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/12123338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliotjones1",
"html_url": "https://github.com/eliotjones1",
"followers_url": "https://api.github.com/users/eliotjones1/followers",
"following_url": "https://api.github.com/users/eliotjones1/following{/other_user}",
"gists_url": "https://api.github.com/users/eliotjones1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliotjones1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliotjones1/subscriptions",
"organizations_url": "https://api.github.com/users/eliotjones1/orgs",
"repos_url": "https://api.github.com/users/eliotjones1/repos",
"events_url": "https://api.github.com/users/eliotjones1/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliotjones1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39969/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39968 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39968/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39968/comments | https://api.github.com/repos/huggingface/transformers/issues/39968/events | https://github.com/huggingface/transformers/pull/39968 | 3,297,901,294 | PR_kwDOCUB6oc6ic2ii | 39,968 | [superglue] Fixed the way batch mask was applied to the scores before match assignment computation | {
"login": "sbucaille",
"id": 24275548,
"node_id": "MDQ6VXNlcjI0Mjc1NTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/24275548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbucaille",
"html_url": "https://github.com/sbucaille",
"followers_url": "https://api.github.com/users/sbucaille/followers",
"following_url": "https://api.github.com/users/sbucaille/following{/other_user}",
"gists_url": "https://api.github.com/users/sbucaille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sbucaille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbucaille/subscriptions",
"organizations_url": "https://api.github.com/users/sbucaille/orgs",
"repos_url": "https://api.github.com/users/sbucaille/repos",
"events_url": "https://api.github.com/users/sbucaille/events{/privacy}",
"received_events_url": "https://api.github.com/users/sbucaille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 5769473378,
"node_id": "LA_kwDOCUB6oc8AAAABV-MtYg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Vision",
"name": "Vision",
"color": "C079EF",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-08-06T20:04:14 | 2025-08-09T17:37:36 | 2025-08-07T08:49:39 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39968",
"html_url": "https://github.com/huggingface/transformers/pull/39968",
"diff_url": "https://github.com/huggingface/transformers/pull/39968.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39968.patch",
"merged_at": "2025-08-07T08:49:39"
} | # What does this PR do?
Fixes the way the mask is applied to the scores in SuperGlue.
I realized, in some cases not covered by the tests, that I end up with the following error:
```python
self = SuperGlueImageProcessor {
"do_grayscale": true,
"do_rescale": true,
"do_resize": true,
"image_processor_type":...ssor",
"resample": 2,
"rescale_factor": 0.00392156862745098,
"size": {
"height": 480,
"width": 640
}
}
outputs = ModelOutput([('matches', tensor([[[ -1, -1, -1, ..., -1, -1, -1],
[ -1, 125, 137, ..., -1, -1, -1]]...0, 0.0000]]]])), ('mask', tensor([[[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 0, 0, 0]]], dtype=torch.int32))])
target_sizes = [[(768, 1025), (1026, 768)]], threshold = 0.0001
def post_process_keypoint_matching(
self,
outputs: "KeypointMatchingOutput",
target_sizes: Union[TensorType, list[tuple]],
threshold: float = 0.0,
) -> list[dict[str, torch.Tensor]]:
"""
Converts the raw output of [`KeypointMatchingOutput`] into lists of keypoints, scores and descriptors
with coordinates absolute to the original image sizes.
Args:
outputs ([`KeypointMatchingOutput`]):
Raw outputs of the model.
target_sizes (`torch.Tensor` or `list[tuple[tuple[int, int]]]`, *optional*):
Tensor of shape `(batch_size, 2, 2)` or list of tuples of tuples (`tuple[int, int]`) containing the
target size `(height, width)` of each image in the batch. This must be the original image size (before
any processing).
threshold (`float`, *optional*, defaults to 0.0):
Threshold to filter out the matches with low scores.
Returns:
`list[Dict]`: A list of dictionaries, each dictionary containing the keypoints in the first and second image
of the pair, the matching scores and the matching indices.
"""
if outputs.mask.shape[0] != len(target_sizes):
raise ValueError("Make sure that you pass in as many target sizes as the batch dimension of the mask")
if not all(len(target_size) == 2 for target_size in target_sizes):
raise ValueError("Each element of target_sizes must contain the size (h, w) of each image of the batch")
if isinstance(target_sizes, list):
image_pair_sizes = torch.tensor(target_sizes, device=outputs.mask.device)
else:
if target_sizes.shape[1] != 2 or target_sizes.shape[2] != 2:
raise ValueError(
"Each element of target_sizes must contain the size (h, w) of each image of the batch"
)
image_pair_sizes = target_sizes
keypoints = outputs.keypoints.clone()
keypoints = keypoints * image_pair_sizes.flip(-1).reshape(-1, 2, 1, 2)
keypoints = keypoints.to(torch.int32)
results = []
for mask_pair, keypoints_pair, matches, scores in zip(
outputs.mask, keypoints, outputs.matches[:, 0], outputs.matching_scores[:, 0]
):
mask0 = mask_pair[0] > 0
mask1 = mask_pair[1] > 0
keypoints0 = keypoints_pair[0][mask0]
keypoints1 = keypoints_pair[1][mask1]
matches0 = matches[mask0]
scores0 = scores[mask0]
# Filter out matches with low scores
valid_matches = torch.logical_and(scores0 > threshold, matches0 > -1)
matched_keypoints0 = keypoints0[valid_matches]
> matched_keypoints1 = keypoints1[matches0[valid_matches]]
E IndexError: index 561 is out of bounds for dimension 0 with size 561
src/transformers/models/superglue/image_processing_superglue.py:406: IndexError
```
This means that a keypoint in image 0 got assigned a match to a nonexistent keypoint in image 1: index 561 should not appear in the matches, since there are only 561 valid keypoints (indices 0 to 560) on the other image. The way the scores are filled using the mask here is invalid:
https://github.com/huggingface/transformers/blob/743bb5f52e29d83e5d3fd3db4d83146bd4edce28/src/transformers/models/superglue/modeling_superglue.py#L677-L680
In the case of a keypoints tensor of max size 5, imagine there are 2 and 4 valid keypoints in the first and second image respectively; the resulting mask is the following:
```python
1, 1, 1, 1, 0
1, 1, 1, 1, 0
1, 1, 1, 1, 0
1, 1, 1, 1, 0
0, 0, 0, 0, 0
```
where it should have been :
```python
1, 1, 1, 1, 0
1, 1, 1, 1, 0
0, 0, 0, 0, 0
0, 0, 0, 0, 0
0, 0, 0, 0, 0
```
The following fixes the issue:
```python
if mask is not None:
mask = mask.reshape(batch_size, 2, num_keypoints)
mask0 = mask[:, 0].unsqueeze(2)
mask1 = mask[:, 1].unsqueeze(1)
mask = torch.logical_and(mask0, mask1)
scores = scores.masked_fill(mask == 0, torch.finfo(scores.dtype).min)
```
I've added tests to make sure there is not matches that are beyond the scope of the mask.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@qubvel | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39968/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39967 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39967/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39967/comments | https://api.github.com/repos/huggingface/transformers/issues/39967/events | https://github.com/huggingface/transformers/pull/39967 | 3,297,884,118 | PR_kwDOCUB6oc6icyv7 | 39,967 | Bump transformers from 4.48.0 to 4.53.0 in /examples/tensorflow/language-modeling-tpu | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
},
{
"id": 6410654816,
"node_id": "LA_kwDOCUB6oc8AAAABfhrUYA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/python",
"name": "python",
"color": "2b67c6",
"default": false,
"description": "Pull requests that update Python code"
}
] | closed | false | null | [] | null | [] | 2025-08-06T19:57:21 | 2025-08-07T11:13:49 | 2025-08-07T11:13:48 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39967",
"html_url": "https://github.com/huggingface/transformers/pull/39967",
"diff_url": "https://github.com/huggingface/transformers/pull/39967.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39967.patch",
"merged_at": "2025-08-07T11:13:48"
} | Bumps [transformers](https://github.com/huggingface/transformers) from 4.48.0 to 4.53.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h2>Release v4.53.0</h2>
<h3>Gemma3n</h3>
<p>Gemma 3n models are designed for efficient execution on low-resource devices. They are capable of multimodal input, handling text, image, video, and audio input, and generating text outputs, with open weights for pre-trained and instruction-tuned variants. These models were trained with data in over 140 spoken languages.</p>
<p>Gemma 3n models use selective parameter activation technology to reduce resource requirements. This technique allows the models to operate at an effective size of 2B and 4B parameters, which is lower than the total number of parameters they contain. For more information on Gemma 3n's efficient parameter management technology, see the <a href="https://ai.google.dev/gemma/docs/gemma-3n#parameters">Gemma 3n</a> page.</p>
<p><img src="https://github.com/user-attachments/assets/858cb034-364d-4eb6-8de8-4a0b5eaff3d7" alt="image" /></p>
<pre lang="python"><code>from transformers import pipeline
import torch
<p>pipe = pipeline(
"image-text-to-text",
torch_dtype=torch.bfloat16,
model="google/gemma-3n-e4b",
device="cuda",
)
output = pipe(
"<a href="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg">https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg</a>",
text="<image_soft_token> in this image, there is"
)</p>
<p>print(output)
</code></pre></p>
<h3>Dia</h3>
<p><img src="https://github.com/user-attachments/assets/bf86e887-e4f4-4222-993d-f5eac58f8040" alt="image" /></p>
<p>Dia is an opensource text-to-speech (TTS) model (1.6B parameters) developed by <a href="https://huggingface.co/nari-labs">Nari Labs</a>.
It can generate highly realistic dialogue from transcript including nonverbal communications such as laughter and coughing.
Furthermore, emotion and tone control is also possible via audio conditioning (voice cloning).</p>
<p><strong>Model Architecture:</strong>
Dia is an encoder-decoder transformer based on the original transformer architecture. However, some more modern features such as
rotational positional embeddings (RoPE) are also included. For its text portion (encoder), a byte tokenizer is utilized while
for the audio portion (decoder), a pretrained codec model <a href="https://github.com/huggingface/transformers/blob/HEAD/dac.md">DAC</a> is used - DAC encodes speech into discrete codebook
tokens and decodes them back into audio.</p>
<ul>
<li>Add Dia model by <a href="https://github.com/buttercrab"><code>@buttercrab</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/38405">#38405</a></li>
</ul>
<h3>Kyutai Speech-to-Text</h3>
<!-- raw HTML omitted -->
<p>Kyutai STT is a speech-to-text model architecture based on the <a href="https://huggingface.co/docs/transformers/en/model_doc/mimi">Mimi codec</a>, which encodes audio into discrete tokens in a streaming fashion, and a <a href="https://huggingface.co/docs/transformers/en/model_doc/moshi">Moshi-like</a> autoregressive decoder. Kyutai’s lab has released two model checkpoints:</p>
<ul>
<li><a href="https://huggingface.co/kyutai/stt-1b-en_fr">kyutai/stt-1b-en_fr</a>: a 1B-parameter model capable of transcribing both English and French</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/67ddc82fbc7e52c6f42a395b4a6d278c55b77a39"><code>67ddc82</code></a> Release: v4.53.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/0a8081b03d118da9a8c3fa143a03afe54a5c624e"><code>0a8081b</code></a> [Modeling] Fix encoder CPU offloading for whisper (<a href="https://redirect.github.com/huggingface/transformers/issues/38994">#38994</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/c63cfd6a833d629a74c098933017c61dd755969d"><code>c63cfd6</code></a> Gemma 3n (<a href="https://redirect.github.com/huggingface/transformers/issues/39059">#39059</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/3e5cc1285503bbdb6a0a3e173b5ae90566862215"><code>3e5cc12</code></a> [tests] remove tests from libraries with deprecated support (flax, tensorflow...</li>
<li><a href="https://github.com/huggingface/transformers/commit/cfff7ca9a27280338c6a57dfa7722dcf44f51a87"><code>cfff7ca</code></a> [Whisper] Pipeline: handle long form generation (<a href="https://redirect.github.com/huggingface/transformers/issues/35750">#35750</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/02ecdcfc0f7d81e90a9c8e7f9e6d636123a84254"><code>02ecdcf</code></a> add _keep_in_fp32_modules_strict (<a href="https://redirect.github.com/huggingface/transformers/issues/39058">#39058</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/d973e62fdd86d64259f87debc46bbcbf6c7e5de2"><code>d973e62</code></a> fix condition where torch_dtype auto collides with model_kwargs. (<a href="https://redirect.github.com/huggingface/transformers/issues/39054">#39054</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/44b231671db25974cfebcdae34402ad5099bf37a"><code>44b2316</code></a> [qwen2-vl] fix vision attention scaling (<a href="https://redirect.github.com/huggingface/transformers/issues/39043">#39043</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/ae15715df138949328d18e1dd95fd9cb4efb8e09"><code>ae15715</code></a> polishing docs: error fixes for clarity (<a href="https://redirect.github.com/huggingface/transformers/issues/39042">#39042</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/3abeaba7e53512ef9c1314163dd7e462ab405ce6"><code>3abeaba</code></a> Create test for <a href="https://redirect.github.com/huggingface/transformers/issues/38916">#38916</a> (custom generate from local dir with imports) (<a href="https://redirect.github.com/huggingface/transformers/issues/39015">#39015</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.48.0...v4.53.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39967/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39966 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39966/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39966/comments | https://api.github.com/repos/huggingface/transformers/issues/39966/events | https://github.com/huggingface/transformers/issues/39966 | 3,297,734,272 | I_kwDOCUB6oc7Ej26A | 39,966 | `convert_deepseek_vl_weights_to_hf.py` not included in v4.55.0 release. | {
"login": "rasmi",
"id": 2267370,
"node_id": "MDQ6VXNlcjIyNjczNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2267370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rasmi",
"html_url": "https://github.com/rasmi",
"followers_url": "https://api.github.com/users/rasmi/followers",
"following_url": "https://api.github.com/users/rasmi/following{/other_user}",
"gists_url": "https://api.github.com/users/rasmi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rasmi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rasmi/subscriptions",
"organizations_url": "https://api.github.com/users/rasmi/orgs",
"repos_url": "https://api.github.com/users/rasmi/repos",
"events_url": "https://api.github.com/users/rasmi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rasmi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-06T18:57:34 | 2025-08-07T16:19:38 | 2025-08-07T16:19:38 | CONTRIBUTOR | null | null | null | null |
`convert_deepseek_vl_weights_to_hf.py`, introduced in #36248, is in [main](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deepseek_vl/convert_deepseek_vl_weights_to_hf.py) but not in the [v4.55.0 release](https://github.com/huggingface/transformers/blob/v4.55.0/src/transformers/models/deepseek_vl/convert_deepseek_vl_weights_to_hf.py).
This causes the following two tests to fail:
* [tests/models/deepseek_vl/test_processing_deepseek_vl.py](https://github.com/huggingface/transformers/blob/ac0b4684657cbff1e8eecb0d966d10b71843dca0/tests/models/deepseek_vl/test_processing_deepseek_vl.py#L19)
* [tests/models/deepseek_vl_hybrid/test_processing_deepseek_vl_hybrid.py](https://github.com/huggingface/transformers/blob/ac0b4684657cbff1e8eecb0d966d10b71843dca0/tests/models/deepseek_vl_hybrid/test_processing_deepseek_vl_hybrid.py#L19)
```
ImportError while importing test module 'tests/models/deepseek_vl/test_processing_deepseek_vl.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/models/deepseek_vl/test_processing_deepseek_vl.py:19: in <module>
from transformers.models.deepseek_vl.convert_deepseek_vl_weights_to_hf import CHAT_TEMPLATE
E ModuleNotFoundError: No module named 'transformers.models.deepseek_vl.convert_deepseek_vl_weights_to_hf'
```
```
ImportError while importing test module 'tests/models/deepseek_vl_hybrid/test_processing_deepseek_vl_hybrid.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/models/deepseek_vl_hybrid/test_processing_deepseek_vl_hybrid.py:19: in <module>
from transformers.models.deepseek_vl.convert_deepseek_vl_weights_to_hf import CHAT_TEMPLATE
E ModuleNotFoundError: No module named 'transformers.models.deepseek_vl.convert_deepseek_vl_weights_to_hf'
```
### Who can help?
@geetu040 @zucchini-nlp @ArthurZucker
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Reproduction
Run tests on v4.55.0 release.
### Expected behavior
Tests pass. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39966/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39966/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39965 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39965/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39965/comments | https://api.github.com/repos/huggingface/transformers/issues/39965/events | https://github.com/huggingface/transformers/pull/39965 | 3,297,440,210 | PR_kwDOCUB6oc6ibR7E | 39,965 | Fix HGNetV2 Model Card and Image Classification Pipeline Usage Tips | {
"login": "ducviet00",
"id": 24910916,
"node_id": "MDQ6VXNlcjI0OTEwOTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/24910916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ducviet00",
"html_url": "https://github.com/ducviet00",
"followers_url": "https://api.github.com/users/ducviet00/followers",
"following_url": "https://api.github.com/users/ducviet00/following{/other_user}",
"gists_url": "https://api.github.com/users/ducviet00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ducviet00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ducviet00/subscriptions",
"organizations_url": "https://api.github.com/users/ducviet00/orgs",
"repos_url": "https://api.github.com/users/ducviet00/repos",
"events_url": "https://api.github.com/users/ducviet00/events{/privacy}",
"received_events_url": "https://api.github.com/users/ducviet00/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T17:12:10 | 2025-08-07T16:33:30 | 2025-08-07T16:33:30 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39965",
"html_url": "https://github.com/huggingface/transformers/pull/39965",
"diff_url": "https://github.com/huggingface/transformers/pull/39965.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39965.patch",
"merged_at": "2025-08-07T16:33:30"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
1. The HGNetV2 model card previously referenced an incorrect paper and has now been updated to accurately describe the model, following the guidelines in https://github.com/huggingface/transformers/issues/36979.
2. I also corrected the contributed model card to fix usage tips for the image-classification pipeline: the pipeline expects `inputs` as a positional argument, not `images`.
3. Additionally, I fixed a typo by closing the `<hfoptions>` tag in the DiT section.
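The usage-tip fix in point 2 comes down to the call signature: the pipeline's first positional parameter is named `inputs`, so passing the image positionally (or as `inputs=`) works while `images=` does not. A minimal stand-in (hypothetical — not the real transformers pipeline class) illustrates the failure mode:

```python
# Hypothetical stand-in for the image-classification pipeline call:
# its first positional parameter is named `inputs`, not `images`.
def classify(inputs, top_k=5):
    return {"input": inputs, "top_k": top_k}

print(classify("bee.jpg"))         # positional: works
print(classify(inputs="bee.jpg"))  # keyword `inputs`: works

try:
    classify(images="bee.jpg")     # keyword `images`: TypeError
except TypeError as err:
    print("TypeError:", err)
```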
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39965/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39964 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39964/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39964/comments | https://api.github.com/repos/huggingface/transformers/issues/39964/events | https://github.com/huggingface/transformers/pull/39964 | 3,297,193,476 | PR_kwDOCUB6oc6iacDH | 39,964 | fix glm4v image process | {
"login": "KeyKy",
"id": 2967075,
"node_id": "MDQ6VXNlcjI5NjcwNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2967075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KeyKy",
"html_url": "https://github.com/KeyKy",
"followers_url": "https://api.github.com/users/KeyKy/followers",
"following_url": "https://api.github.com/users/KeyKy/following{/other_user}",
"gists_url": "https://api.github.com/users/KeyKy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KeyKy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KeyKy/subscriptions",
"organizations_url": "https://api.github.com/users/KeyKy/orgs",
"repos_url": "https://api.github.com/users/KeyKy/repos",
"events_url": "https://api.github.com/users/KeyKy/events{/privacy}",
"received_events_url": "https://api.github.com/users/KeyKy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T15:41:17 | 2025-08-06T16:46:58 | 2025-08-06T16:46:58 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39964",
"html_url": "https://github.com/huggingface/transformers/pull/39964",
"diff_url": "https://github.com/huggingface/transformers/pull/39964.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39964.patch",
"merged_at": "2025-08-06T16:46:58"
} | @amyeroberts , @qubvel
Issue: `shortest_edge` and `longest_edge` in `preprocessor_config.json` are being ignored during GLM-4V image preprocessing. This PR fixes that.
| {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39964/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39963 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39963/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39963/comments | https://api.github.com/repos/huggingface/transformers/issues/39963/events | https://github.com/huggingface/transformers/issues/39963 | 3,296,743,556 | I_kwDOCUB6oc7EgFCE | 39,963 | change `dataloader_persistent_workers` default value to `True` | {
"login": "farbodbj",
"id": 110523279,
"node_id": "U_kgDOBpZzjw",
"avatar_url": "https://avatars.githubusercontent.com/u/110523279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farbodbj",
"html_url": "https://github.com/farbodbj",
"followers_url": "https://api.github.com/users/farbodbj/followers",
"following_url": "https://api.github.com/users/farbodbj/following{/other_user}",
"gists_url": "https://api.github.com/users/farbodbj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/farbodbj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farbodbj/subscriptions",
"organizations_url": "https://api.github.com/users/farbodbj/orgs",
"repos_url": "https://api.github.com/users/farbodbj/repos",
"events_url": "https://api.github.com/users/farbodbj/events{/privacy}",
"received_events_url": "https://api.github.com/users/farbodbj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-06T13:34:04 | 2025-10-26T08:03:00 | null | NONE | null | null | null | null | https://github.com/huggingface/transformers/blob/82eb67e62a0a66b46647ff4132c173d2f3b8b54f/src/transformers/training_args.py#L1339
As described in the documentation, setting this configuration to `True` speeds up training at the cost of higher RAM usage, and the default is currently set to `False`.
I believe this configuration should default to `True` for two reasons:
1- Recreating the workers during training bottlenecks the GPU and causes a significant slowdown, potentially doubling the training time (benchmarked on a single A100 fine-tuning whisper-large-v3)
2- To mitigate this slowdown, which is visible as a square-wave pattern in GPU utilization, practitioners have to go through a lot of configuration and many hours of time-consuming tests
I request changing the default value of `dataloader_persistent_workers` to `True`.
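For reference, opting in today requires setting the flag explicitly. A minimal sketch follows (the output directory and worker count are placeholders); note that `dataloader_persistent_workers` only takes effect when `dataloader_num_workers > 0`, since it maps onto the `persistent_workers` flag of `torch.utils.data.DataLoader`:

```python
# Illustrative configuration fragment only.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    dataloader_num_workers=4,            # workers must be > 0 ...
    dataloader_persistent_workers=True,  # ... for persistence to apply
)
```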
If the maintainers of this repo are OK with this change, I will submit the PR promptly. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39963/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39962 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39962/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39962/comments | https://api.github.com/repos/huggingface/transformers/issues/39962/events | https://github.com/huggingface/transformers/pull/39962 | 3,296,704,354 | PR_kwDOCUB6oc6iYv0f | 39,962 | Use torch._check instead of a test to make the model Gemma3 exportable | {
"login": "xadupre",
"id": 22452781,
"node_id": "MDQ6VXNlcjIyNDUyNzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/22452781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xadupre",
"html_url": "https://github.com/xadupre",
"followers_url": "https://api.github.com/users/xadupre/followers",
"following_url": "https://api.github.com/users/xadupre/following{/other_user}",
"gists_url": "https://api.github.com/users/xadupre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xadupre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xadupre/subscriptions",
"organizations_url": "https://api.github.com/users/xadupre/orgs",
"repos_url": "https://api.github.com/users/xadupre/repos",
"events_url": "https://api.github.com/users/xadupre/events{/privacy}",
"received_events_url": "https://api.github.com/users/xadupre/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-06T13:23:15 | 2025-08-06T15:08:10 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39962",
"html_url": "https://github.com/huggingface/transformers/pull/39962",
"diff_url": "https://github.com/huggingface/transformers/pull/39962.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39962.patch",
"merged_at": null
} | # What does this PR do?
`torch.export.export` fails on a check that used to raise an exception when a condition was not true. The check is replaced by `torch._check` so that `torch.export.export` no longer complains about it.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39962/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39961 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39961/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39961/comments | https://api.github.com/repos/huggingface/transformers/issues/39961/events | https://github.com/huggingface/transformers/issues/39961 | 3,296,600,078 | I_kwDOCUB6oc7EfiAO | 39,961 | Calling `trainer.evaluate()` before `trainer.train()` with FSDP 2 raises `ValueError: When using FSDP2, a model and optimizer must be passed together to `Accelerator.prepare()...` | {
"login": "RonanFR",
"id": 10586126,
"node_id": "MDQ6VXNlcjEwNTg2MTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/10586126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RonanFR",
"html_url": "https://github.com/RonanFR",
"followers_url": "https://api.github.com/users/RonanFR/followers",
"following_url": "https://api.github.com/users/RonanFR/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanFR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RonanFR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanFR/subscriptions",
"organizations_url": "https://api.github.com/users/RonanFR/orgs",
"repos_url": "https://api.github.com/users/RonanFR/repos",
"events_url": "https://api.github.com/users/RonanFR/events{/privacy}",
"received_events_url": "https://api.github.com/users/RonanFR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-06T12:53:42 | 2025-10-06T23:56:24 | 2025-09-14T08:02:54 | NONE | null | null | null | null | ### System Info
**Environement:**
- `transformers` version: 4.55.0
- Platform: Linux-5.15.0-1087-aws-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.4.4
- Accelerate version: 1.9.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: FSDP
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 4
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- fsdp_config: {'fsdp_activation_checkpointing': False, 'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP', 'fsdp_cpu_ram_efficient_loading': True, 'fsdp_offload_params': False, 'fsdp_reshard_after_forward': True, 'fsdp_state_dict_type': 'SHARDED_STATE_DICT', 'fsdp_version': 2}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: 0.17.4
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): 2.17.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: yes
- Using GPU in script?: yes
- GPU type: Tesla T4
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Here is a Minimum Reproducible Example:
I am executing command `accelerate launch __main__.py` on a machine equipped with 4 T4 GPUs, where
- the accelerate config file contains the following content:
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
enable_cpu_affinity: false
fsdp_config:
fsdp_activation_checkpointing: false
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_cpu_ram_efficient_loading: true
fsdp_offload_params: false
fsdp_reshard_after_forward: true
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_version: 2
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
- the content of the `__main__.py` file is:
```
import torch
from transformers import Trainer, PreTrainedTokenizerBase, TrainingArguments, AutoModelForSequenceClassification, AutoTokenizer
from datasets import load_dataset
from peft import (
LoraConfig,
get_peft_model,
)
import mlflow
def main():
mlflow.set_experiment(experiment_id="712770214878318")
# Load model + tokenizer
model = AutoModelForSequenceClassification.from_pretrained(
"bigscience/bloom-560m",
num_labels=5,
device_map="cuda"
)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
print({p.data.dtype for p in model.parameters()})
tokenizer.add_special_tokens({"pad_token": tokenizer.eos_token})
model.resize_token_embeddings(len(tokenizer))
model.config.pad_token_id = tokenizer.pad_token_id
# Load + format dataset
dataset = load_dataset("yelp_review_full")["train"].select(range(100))
def tokenize_function(examples):
return tokenizer(
examples["text"],
max_length=20,
padding="max_length",
truncation=True
)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
# Applying PEFT
peft_model = get_peft_model(
model,
LoraConfig(
peft_type="LORA",
task_type="SEQ_CLS",
r=8,
lora_alpha=16,
lora_dropout=0.05,
),
)
# Train
training_args = TrainingArguments(
per_device_train_batch_size=2,
num_train_epochs=1,
torch_compile=False,
logging_strategy="steps",
logging_steps=1,
dataloader_pin_memory=True,
output_dir="/tmp/test1",
overwrite_output_dir=True,
ddp_find_unused_parameters=True,
use_cpu=False
)
trainer = Trainer(
model=peft_model,
train_dataset=tokenized_datasets,
eval_dataset=tokenized_datasets,
args=training_args,
processing_class=PreTrainedTokenizerBase,
)
trainer.model_accepts_loss_kwargs = False
trainer.evaluate()
if __name__ == "__main__":
main()
```
I am getting the following error: `ValueError: When using FSDP2, a model and optimizer must be passed together to `Accelerator.prepare()` as the optimizer needs to have its parameters modified after the model is converted.`
When `trainer.train()` is called, the problem does not occur.
### Expected behavior
This script should work just fine.
From what I understand (see this issue: https://github.com/huggingface/accelerate/issues/3476), the problem with FSDP 2 and the `transformers` library was fixed a couple of months ago by these lines: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2332-L2334
However, it was only fixed in the `_inner_training_loop` method; when calling `trainer.evaluate()` before any training, the initial problem still persists. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39961/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39961/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39960 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39960/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39960/comments | https://api.github.com/repos/huggingface/transformers/issues/39960/events | https://github.com/huggingface/transformers/pull/39960 | 3,296,464,611 | PR_kwDOCUB6oc6iX65T | 39,960 | Gemma3 fixes | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T12:12:58 | 2025-08-07T07:57:21 | 2025-08-07T07:57:21 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39960",
"html_url": "https://github.com/huggingface/transformers/pull/39960",
"diff_url": "https://github.com/huggingface/transformers/pull/39960.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39960.patch",
"merged_at": "2025-08-07T07:57:21"
} | This PR fixes several `gemma3` tests:
- on any GPU, a multi-device error could arise in the forward pass of `Gemma3Model`; it was fixed by specifying the device in a tensor creation
- on AMD MI300, there are now new expectations for some generation tests
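The device fix follows the usual pattern of allocating new tensors on the input's device; a minimal sketch (the helper name is illustrative, not the actual Gemma3 code):

```python
import torch

def position_ids_like(x: torch.Tensor) -> torch.Tensor:
    # without device=x.device, torch.arange allocates on the CPU, and any
    # later op mixing it with a CUDA tensor raises a device-mismatch error
    return torch.arange(x.shape[-1], device=x.device)
```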
cc. @zucchini-nlp maybe because this is a multi-modal model | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39960/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39959 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39959/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39959/comments | https://api.github.com/repos/huggingface/transformers/issues/39959/events | https://github.com/huggingface/transformers/pull/39959 | 3,296,242,626 | PR_kwDOCUB6oc6iXKZA | 39,959 | Fix grammatical error in MoE variable name: expert_hitted → expert_hit, hitted_experts → hit_experts | {
"login": "Mihonarium",
"id": 24436954,
"node_id": "MDQ6VXNlcjI0NDM2OTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/24436954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mihonarium",
"html_url": "https://github.com/Mihonarium",
"followers_url": "https://api.github.com/users/Mihonarium/followers",
"following_url": "https://api.github.com/users/Mihonarium/following{/other_user}",
"gists_url": "https://api.github.com/users/Mihonarium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mihonarium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mihonarium/subscriptions",
"organizations_url": "https://api.github.com/users/Mihonarium/orgs",
"repos_url": "https://api.github.com/users/Mihonarium/repos",
"events_url": "https://api.github.com/users/Mihonarium/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mihonarium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T11:00:57 | 2025-08-06T17:09:09 | 2025-08-06T15:45:20 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39959",
"html_url": "https://github.com/huggingface/transformers/pull/39959",
"diff_url": "https://github.com/huggingface/transformers/pull/39959.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39959.patch",
"merged_at": "2025-08-06T15:45:20"
} | # What does this PR do?
Fixes a grammatical error in variable naming across all Mixture of Experts (MoE) implementations. The variables `expert_hitted` and `hitted_experts` are grammatically incorrect: the past tense/past participle of "hit" is "hit", not "hitted".
Fixes #39955.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
## Who can review?
Anyone can review. This is a simple grammatical fix in internal variable naming. No functionality changes.
## Changes made
Renamed `expert_hitted` → `expert_hit` in:
- `src/transformers/models/gpt_oss/modular_gpt_oss.py`
- `src/transformers/models/mixtral/modular_mixtral.py`
- `src/transformers/models/ernie4_5_moe/modular_ernie4_5_moe.py`
- `src/transformers/models/qwen3_moe/modular_qwen3_moe.py`
- `src/transformers/models/gpt_oss/modeling_gpt_oss.py`
- `src/transformers/models/mixtral/modeling_mixtral.py`
- `src/transformers/models/ernie4_5_moe/modeling_ernie4_5_moe.py`
- `src/transformers/models/qwen3_moe/modeling_qwen3_moe.py`
- `src/transformers/models/qwen2_moe/modeling_qwen2_moe.py`
- `src/transformers/models/minimax/modeling_minimax.py`
Renamed `hitted_experts` -> `hit_experts` in:
- `src/transformers/integrations/mxfp4.py`
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39959/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39958 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39958/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39958/comments | https://api.github.com/repos/huggingface/transformers/issues/39958/events | https://github.com/huggingface/transformers/issues/39958 | 3,296,240,560 | I_kwDOCUB6oc7EeKOw | 39,958 | TypeError: Received a NoneType for argument video_processor, but a BaseVideoProcessor was expected.(this issue im getting when using doc-ocr) | {
"login": "2ayush2",
"id": 80869490,
"node_id": "MDQ6VXNlcjgwODY5NDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/80869490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/2ayush2",
"html_url": "https://github.com/2ayush2",
"followers_url": "https://api.github.com/users/2ayush2/followers",
"following_url": "https://api.github.com/users/2ayush2/following{/other_user}",
"gists_url": "https://api.github.com/users/2ayush2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/2ayush2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/2ayush2/subscriptions",
"organizations_url": "https://api.github.com/users/2ayush2/orgs",
"repos_url": "https://api.github.com/users/2ayush2/repos",
"events_url": "https://api.github.com/users/2ayush2/events{/privacy}",
"received_events_url": "https://api.github.com/users/2ayush2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-08-06T11:00:13 | 2025-08-28T18:11:34 | null | NONE | null | null | null | null | ### Feature request
(venv) PS C:\Treeleaf-project\smartid-processor\smartid\service\models\dots.ocr> python .\demo_hf.py
Loading checkpoint shards: 100%|████████████████████████████████████████████████| 2/2 [00:15<00:00, 7.52s/it]
The image processor of type `Qwen2VLImageProcessor` is now loaded as a fast processor by default, even if the model checkpoint was saved with a slow processor. This is a breaking change and may produce slightly different outputs. To continue using the slow processor, instantiate this class with `use_fast=False`. Note that this behavior will be extended to all models in a future release.
You have video processor config saved in `preprocessor.json` file which is deprecated. Video processor configs should be saved in their own `video_preprocessor.json` file. You can rename the file or load and save the processor back which renames it automatically. Loading from `preprocessor.json` will be removed in v5.0.
Traceback (most recent call last):
File "C:\Treeleaf-project\smartid-processor\smartid\service\models\dots.ocr\demo_hf.py", line 65, in <module>
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
File "C:\Treeleaf-project\smartid-processor\venv\lib\site-packages\transformers\models\auto\processing_auto.py", line 385, in from_pretrained
return processor_class.from_pretrained(
File "C:\Treeleaf-project\smartid-processor\venv\lib\site-packages\transformers\processing_utils.py", line 1314, in from_pretrained
return cls.from_args_and_dict(args, processor_dict, **kwargs)
File "C:\Treeleaf-project\smartid-processor\venv\lib\site-packages\transformers\processing_utils.py", line 1115, in from_args_and_dict
processor = cls(*args, **valid_kwargs)
File "C:\Users\User\.cache\huggingface\modules\transformers_modules\DotsOCR\configuration_dots.py", line 71, in __init__
super().__init__(image_processor, tokenizer, chat_template=chat_template)
File "C:\Treeleaf-project\smartid-processor\venv\lib\site-packages\transformers\models\qwen2_5_vl\processing_qwen2_5_vl.py", line 96, in __init__
super().__init__(image_processor, tokenizer, video_processor, chat_template=chat_template)
File "C:\Treeleaf-project\smartid-processor\venv\lib\site-packages\transformers\processing_utils.py", line 557, in __init__
self.check_argument_for_proper_class(attribute_name, arg)
File "C:\Treeleaf-project\smartid-processor\venv\lib\site-packages\transformers\processing_utils.py", line 575, in check_argument_for_proper_class
raise TypeError(
TypeError: Received a NoneType for argument video_processor, but a BaseVideoProcessor was expected.
(venv) PS C:\Treeleaf-project\smartid-processor\smartid\service\models\dots.ocr>
### Motivation
helo to fix it
### Your contribution
gf | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39958/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39958/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39956 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39956/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39956/comments | https://api.github.com/repos/huggingface/transformers/issues/39956/events | https://github.com/huggingface/transformers/pull/39956 | 3,296,181,195 | PR_kwDOCUB6oc6iW9R8 | 39,956 | Harmonize `past_key_value` to `past_key_valueS` everywhere | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T10:41:20 | 2025-08-08T09:53:00 | 2025-08-08T09:52:58 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39956",
"html_url": "https://github.com/huggingface/transformers/pull/39956",
"diff_url": "https://github.com/huggingface/transformers/pull/39956.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39956.patch",
"merged_at": "2025-08-08T09:52:58"
As per the title. I'm tired of seeing both spellings; it was time to finally make it consistent everywhere.
All the changes are made, so I'll only need to remove the decorators after the next release (these are only internal modules, so no need for a long deprecation cycle).
Also reapplied modular to `examples/modular-transformer` | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39956/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39955 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39955/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39955/comments | https://api.github.com/repos/huggingface/transformers/issues/39955/events | https://github.com/huggingface/transformers/issues/39955 | 3,296,143,740 | I_kwDOCUB6oc7Edyl8 | 39,955 | Fix grammatically incorrect variable name "expert_hitted" → "expert_hit" in MoE implementation | {
"login": "Mihonarium",
"id": 24436954,
"node_id": "MDQ6VXNlcjI0NDM2OTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/24436954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mihonarium",
"html_url": "https://github.com/Mihonarium",
"followers_url": "https://api.github.com/users/Mihonarium/followers",
"following_url": "https://api.github.com/users/Mihonarium/following{/other_user}",
"gists_url": "https://api.github.com/users/Mihonarium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mihonarium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mihonarium/subscriptions",
"organizations_url": "https://api.github.com/users/Mihonarium/orgs",
"repos_url": "https://api.github.com/users/Mihonarium/repos",
"events_url": "https://api.github.com/users/Mihonarium/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mihonarium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T10:27:27 | 2025-08-06T15:45:21 | 2025-08-06T15:45:21 | CONTRIBUTOR | null | null | null | null | ## Description
The variable name `expert_hitted` used in the Mixture of Experts (MoE) implementations is grammatically incorrect. The past tense/past participle of "hit" is "hit", not "hitted".
## Current behavior
The codebase currently uses `expert_hitted` in:
- `src/transformers/models/gpt_oss/modular_gpt_oss.py`
- `src/transformers/models/mixtral/modular_mixtral.py`
- `src/transformers/models/ernie4_5_moe/modular_ernie4_5_moe.py`
- `src/transformers/models/qwen3_moe/modular_qwen3_moe.py`
As well as what I assume is generated code:
- `src/transformers/models/gpt_oss/modeling_gpt_oss.py`
- `src/transformers/models/mixtral/modeling_mixtral.py`
- `src/transformers/models/ernie4_5_moe/modeling_ernie4_5_moe.py`
- `src/transformers/models/qwen3_moe/modeling_qwen3_moe.py`
- `src/transformers/models/qwen2_moe/modeling_qwen2_moe.py`
- `src/transformers/models/minimax/modeling_minimax.py`
It also uses `hitted_experts` in:
- `src/transformers/integrations/mxfp4.py`
## Proposed solution
Rename all instances of `expert_hitted` to `expert_hit` and `hitted_experts` to `hit_experts` to correct the grammar while maintaining the original intent of the variable name.
## Impact
- This is a non-breaking internal variable name change
- Improves code readability
- No API changes required
## Additional context
I'll try to submit a PR to fix this. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39955/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39954 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39954/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39954/comments | https://api.github.com/repos/huggingface/transformers/issues/39954/events | https://github.com/huggingface/transformers/issues/39954 | 3,296,049,185 | I_kwDOCUB6oc7Edbgh | 39,954 | [gpt‑oss] eager_attention_forward not using sliding-window attention for GPT‑OSS models | {
"login": "AlfredTino",
"id": 41940791,
"node_id": "MDQ6VXNlcjQxOTQwNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/41940791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlfredTino",
"html_url": "https://github.com/AlfredTino",
"followers_url": "https://api.github.com/users/AlfredTino/followers",
"following_url": "https://api.github.com/users/AlfredTino/following{/other_user}",
"gists_url": "https://api.github.com/users/AlfredTino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlfredTino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlfredTino/subscriptions",
"organizations_url": "https://api.github.com/users/AlfredTino/orgs",
"repos_url": "https://api.github.com/users/AlfredTino/repos",
"events_url": "https://api.github.com/users/AlfredTino/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlfredTino/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T09:55:07 | 2025-08-07T02:28:56 | 2025-08-07T02:28:56 | NONE | null | null | null | null | In the latest transformers version v4.55.0, the GPT‑OSS model’s eager_attention_forward implementation does **not** use sliding‑window attention. This behavior diverges from the original GPT‑OSS specification, where alternating full‑context and sliding‑window attention (e.g. window size 128) is a key architectural feature—such as in the vLLM and model card descriptions that note gpt‑oss uses intermittent sliding‑window patterns for long‑context efficiency (alternating local and global attention).
Although the eager attention backend may rarely be used in production, it should remain consistent with the other attention backends (e.g. FlashAttention) and with the original GPT‑OSS behavior.
| {
"login": "AlfredTino",
"id": 41940791,
"node_id": "MDQ6VXNlcjQxOTQwNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/41940791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlfredTino",
"html_url": "https://github.com/AlfredTino",
"followers_url": "https://api.github.com/users/AlfredTino/followers",
"following_url": "https://api.github.com/users/AlfredTino/following{/other_user}",
"gists_url": "https://api.github.com/users/AlfredTino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlfredTino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlfredTino/subscriptions",
"organizations_url": "https://api.github.com/users/AlfredTino/orgs",
"repos_url": "https://api.github.com/users/AlfredTino/repos",
"events_url": "https://api.github.com/users/AlfredTino/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlfredTino/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39954/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39953 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39953/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39953/comments | https://api.github.com/repos/huggingface/transformers/issues/39953/events | https://github.com/huggingface/transformers/pull/39953 | 3,295,992,379 | PR_kwDOCUB6oc6iWUTR | 39,953 | Fix MXFP4 quantizer validation to allow CPU inference with dequantize option | {
"login": "returnL",
"id": 44701395,
"node_id": "MDQ6VXNlcjQ0NzAxMzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/44701395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/returnL",
"html_url": "https://github.com/returnL",
"followers_url": "https://api.github.com/users/returnL/followers",
"following_url": "https://api.github.com/users/returnL/following{/other_user}",
"gists_url": "https://api.github.com/users/returnL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/returnL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/returnL/subscriptions",
"organizations_url": "https://api.github.com/users/returnL/orgs",
"repos_url": "https://api.github.com/users/returnL/repos",
"events_url": "https://api.github.com/users/returnL/events{/privacy}",
"received_events_url": "https://api.github.com/users/returnL/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-08-06T09:35:54 | 2025-08-06T17:52:53 | 2025-08-06T13:20:41 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39953",
"html_url": "https://github.com/huggingface/transformers/pull/39953",
"diff_url": "https://github.com/huggingface/transformers/pull/39953.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39953.patch",
"merged_at": "2025-08-06T13:20:41"
} | # What does this PR do?
This PR fixes a bug that prevented MXFP4 models from running on CPU when `quantization_config.dequantize=True` was set.
## Problem
The validation logic in `Mxfp4HfQuantizer` checked CUDA availability before checking the `dequantize` flag, causing failures on CPU-only environments even when dequantization was enabled.
## Solution
Reordered validation checks to prioritize `dequantize` configuration:
1. Check if `dequantize` is enabled - if yes, skip GPU validations
2. Only then check CUDA availability
## Changes Made
- **Fix**: Moved `dequantize` check before CUDA validation in `quantizer_mxfp4.py`
- **Tests**: Added test cases to verify CPU inference with `dequantize=True`
## Before submitting
- [x] Did you read the contributor guideline?
- [x] Did you write any new necessary tests?
## Who can review?
@SunMarc @MekkCyber | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39953/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39952 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39952/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39952/comments | https://api.github.com/repos/huggingface/transformers/issues/39952/events | https://github.com/huggingface/transformers/pull/39952 | 3,295,934,217 | PR_kwDOCUB6oc6iWHoM | 39,952 | [DO NOT MERGE] Testing safetensors 0.6.1rc0 | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T09:18:12 | 2025-08-06T12:23:15 | 2025-08-06T12:23:15 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39952",
"html_url": "https://github.com/huggingface/transformers/pull/39952",
"diff_url": "https://github.com/huggingface/transformers/pull/39952.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39952.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39952/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39951 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39951/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39951/comments | https://api.github.com/repos/huggingface/transformers/issues/39951/events | https://github.com/huggingface/transformers/pull/39951 | 3,295,856,263 | PR_kwDOCUB6oc6iV2xA | 39,951 | circleci: pin torch 2.7.1 until `torchcodec` is updated | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T08:57:50 | 2025-08-06T09:18:02 | 2025-08-06T09:18:00 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39951",
"html_url": "https://github.com/huggingface/transformers/pull/39951",
"diff_url": "https://github.com/huggingface/transformers/pull/39951.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39951.patch",
"merged_at": "2025-08-06T09:18:00"
} | # What does this PR do?
to make CircleCI ✅ | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39951/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39950 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39950/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39950/comments | https://api.github.com/repos/huggingface/transformers/issues/39950/events | https://github.com/huggingface/transformers/pull/39950 | 3,295,846,160 | PR_kwDOCUB6oc6iV0lH | 39,950 | Add pytest marker: `torch_compile_test` and `torch_export_test` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T08:55:01 | 2025-08-13T21:47:18 | 2025-08-13T21:47:16 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39950",
"html_url": "https://github.com/huggingface/transformers/pull/39950",
"diff_url": "https://github.com/huggingface/transformers/pull/39950.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39950.patch",
"merged_at": "2025-08-13T21:47:16"
} | # What does this PR do?
The torch team is considering running some transformers tests on their side, especially the compile and export tests, and has asked us to provide a way to run those tests easily and reliably.
Therefore this PR adds two new pytest markers and applies them to the relevant tests.
A run using `torch_compile_test` is
https://github.com/huggingface/transformers/actions/runs/16751561312/job/47423145415
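For context, custom markers like these are typically registered in the pytest configuration and selected with `-m`; a sketch under the assumption that the markers are registered the usual way (the descriptions are illustrative):

```ini
[pytest]
markers =
    torch_compile_test: tests exercising torch.compile
    torch_export_test: tests exercising torch.export
```

Registered this way, the subsets can be selected with e.g. `pytest -m torch_compile_test`.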
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39950/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39950/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39949 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39949/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39949/comments | https://api.github.com/repos/huggingface/transformers/issues/39949/events | https://github.com/huggingface/transformers/pull/39949 | 3,295,802,034 | PR_kwDOCUB6oc6iVq3b | 39,949 | Add pytest marker: `torch_compile_test` and `torch_export_test` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T08:43:22 | 2025-08-06T08:54:26 | 2025-08-06T08:53:55 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39949",
"html_url": "https://github.com/huggingface/transformers/pull/39949",
"diff_url": "https://github.com/huggingface/transformers/pull/39949.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39949.patch",
"merged_at": null
} | # What does this PR do?
The torch team is considering running some transformers tests on their side, especially the compile and export tests, and has asked us to provide a way to run those tests easily and reliably.
Therefore this PR adds two new pytest markers and applies them to the relevant tests.
A run using `torch_compile_test` is
https://github.com/huggingface/transformers/actions/runs/16751561312/job/47423145415
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39949/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39948 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39948/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39948/comments | https://api.github.com/repos/huggingface/transformers/issues/39948/events | https://github.com/huggingface/transformers/pull/39948 | 3,295,521,206 | PR_kwDOCUB6oc6iUtIs | 39,948 | feat: Support tensor inputs in ImageClassificationPipeline | {
"login": "Hashbrownsss",
"id": 142291877,
"node_id": "U_kgDOCHszpQ",
"avatar_url": "https://avatars.githubusercontent.com/u/142291877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hashbrownsss",
"html_url": "https://github.com/Hashbrownsss",
"followers_url": "https://api.github.com/users/Hashbrownsss/followers",
"following_url": "https://api.github.com/users/Hashbrownsss/following{/other_user}",
"gists_url": "https://api.github.com/users/Hashbrownsss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hashbrownsss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hashbrownsss/subscriptions",
"organizations_url": "https://api.github.com/users/Hashbrownsss/orgs",
"repos_url": "https://api.github.com/users/Hashbrownsss/repos",
"events_url": "https://api.github.com/users/Hashbrownsss/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hashbrownsss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T07:28:05 | 2025-08-07T19:06:09 | 2025-08-07T19:06:09 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39948",
"html_url": "https://github.com/huggingface/transformers/pull/39948",
"diff_url": "https://github.com/huggingface/transformers/pull/39948.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39948.patch",
"merged_at": null
} | # What does this PR do?
Adds support for accepting NumPy arrays and PyTorch tensors as direct inputs to the `ImageClassificationPipeline`. Currently the pipeline's `preprocess` function only accepts PIL images or file paths. This change makes the pipeline flexible enough to use with existing datasets and data pipelines, as requested in the issue.
Fixes #39607
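The core of such a change is a type dispatch in `preprocess`. A minimal stdlib-only sketch of the idea — the function name, tags, and duck-typed checks are illustrative stand-ins, and real code would convert arrays/tensors to PIL images (e.g. via `PIL.Image.fromarray`) rather than return labels:

```python
def classify_image_input(inputs):
    """Illustrative dispatcher: decide how an image-like input should be handled."""
    if isinstance(inputs, str):
        return "load_from_path_or_url"
    if hasattr(inputs, "save"):          # duck-typed PIL.Image.Image
        return "already_pil"
    if hasattr(inputs, "__array__"):     # numpy array or torch tensor
        return "convert_with_fromarray"
    raise TypeError(f"Unsupported image input type: {type(inputs)!r}")
```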
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (This was discussed in issue [#39607).](https://github.com/huggingface/transformers/issues/39607)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
@zucchini-nlp
| {
"login": "Hashbrownsss",
"id": 142291877,
"node_id": "U_kgDOCHszpQ",
"avatar_url": "https://avatars.githubusercontent.com/u/142291877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hashbrownsss",
"html_url": "https://github.com/Hashbrownsss",
"followers_url": "https://api.github.com/users/Hashbrownsss/followers",
"following_url": "https://api.github.com/users/Hashbrownsss/following{/other_user}",
"gists_url": "https://api.github.com/users/Hashbrownsss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hashbrownsss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hashbrownsss/subscriptions",
"organizations_url": "https://api.github.com/users/Hashbrownsss/orgs",
"repos_url": "https://api.github.com/users/Hashbrownsss/repos",
"events_url": "https://api.github.com/users/Hashbrownsss/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hashbrownsss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39948/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39947 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39947/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39947/comments | https://api.github.com/repos/huggingface/transformers/issues/39947/events | https://github.com/huggingface/transformers/issues/39947 | 3,295,261,949 | I_kwDOCUB6oc7EabT9 | 39,947 | v4.55.0 Idefics3 RuntimeError Tensors on different devices | {
"login": "noahleegithub",
"id": 42154767,
"node_id": "MDQ6VXNlcjQyMTU0NzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/42154767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noahleegithub",
"html_url": "https://github.com/noahleegithub",
"followers_url": "https://api.github.com/users/noahleegithub/followers",
"following_url": "https://api.github.com/users/noahleegithub/following{/other_user}",
"gists_url": "https://api.github.com/users/noahleegithub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/noahleegithub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/noahleegithub/subscriptions",
"organizations_url": "https://api.github.com/users/noahleegithub/orgs",
"repos_url": "https://api.github.com/users/noahleegithub/repos",
"events_url": "https://api.github.com/users/noahleegithub/events{/privacy}",
"received_events_url": "https://api.github.com/users/noahleegithub/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-06T05:42:38 | 2025-08-07T09:12:05 | 2025-08-07T09:12:05 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.55.0
- Python version: 3.10.14
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Using GPU in script?: Yes
- GPU type: Tesla V100-PCIE-32GB
### Who can help?
@ArthurZucker @guangy10
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers.models.idefics3.modeling_idefics3 import Idefics3VisionEmbeddings
from transformers.models.idefics3.configuration_idefics3 import Idefics3VisionConfig
model = Idefics3VisionEmbeddings(Idefics3VisionConfig())
model = model.to("cuda")
pixel_values = torch.rand(1, 3, 32, 32, device="cuda")
patch_attention_mask = torch.rand(1, 1, 1, 1, device="cuda") > 0.5
output = model(pixel_values, patch_attention_mask)
```
Gives the following error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument boundaries in method wrapper_CUDA_Tensor_bucketize)
### Expected behavior
Should output a cuda Tensor.
It looks like this is caused by #39614, lines 150 and 151.
```python
[143] boundaries = torch.arange(1 / self.num_patches_per_side, 1.0, 1 / self.num_patches_per_side)
[150] h_indices = torch.arange(nb_patches_h, device=pixel_values.device, dtype=pixel_values.dtype)
[151] w_indices = torch.arange(nb_patches_w, device=pixel_values.device, dtype=pixel_values.dtype)
[156] bucket_coords_h = torch.bucketize(fractional_coords_h, boundaries, right=True)
[157] bucket_coords_w = torch.bucketize(fractional_coords_w, boundaries, right=True)
```
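A minimal sketch of a possible fix (an assumption, not necessarily the patch that was merged) is to build `boundaries` on the input's device, so that `torch.bucketize` receives tensors on a single device:

```python
import torch

def bucketize_patch_coords(nb_patches: int, num_patches_per_side: int,
                           device: torch.device, dtype: torch.dtype) -> torch.Tensor:
    # Create boundaries on the same device as the input instead of the CPU default.
    boundaries = torch.arange(
        1 / num_patches_per_side, 1.0, 1 / num_patches_per_side, device=device
    )
    indices = torch.arange(nb_patches, device=device, dtype=dtype)
    fractional_coords = indices / nb_patches
    # Both arguments now live on `device`, so no cross-device error is raised.
    return torch.bucketize(fractional_coords, boundaries, right=True)

coords = bucketize_patch_coords(4, 32, torch.device("cpu"), torch.float32)
print(coords)  # fractional coords 0, 0.25, 0.5, 0.75 bucketized against 1/32 steps
```

The same call then works unchanged when `device` is `cuda`, since every tensor involved follows `pixel_values.device`.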
`boundaries` is created on the CPU, while `h_indices` and `w_indices` are dependent on the input device causing a mismatch. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39947/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39947/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39946 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39946/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39946/comments | https://api.github.com/repos/huggingface/transformers/issues/39946/events | https://github.com/huggingface/transformers/issues/39946 | 3,295,159,627 | I_kwDOCUB6oc7EaCVL | 39,946 | Retaining computational graph after using AutoImageProcessor | {
"login": "YinniKun",
"id": 94827377,
"node_id": "U_kgDOBabzcQ",
"avatar_url": "https://avatars.githubusercontent.com/u/94827377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YinniKun",
"html_url": "https://github.com/YinniKun",
"followers_url": "https://api.github.com/users/YinniKun/followers",
"following_url": "https://api.github.com/users/YinniKun/following{/other_user}",
"gists_url": "https://api.github.com/users/YinniKun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YinniKun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YinniKun/subscriptions",
"organizations_url": "https://api.github.com/users/YinniKun/orgs",
"repos_url": "https://api.github.com/users/YinniKun/repos",
"events_url": "https://api.github.com/users/YinniKun/events{/privacy}",
"received_events_url": "https://api.github.com/users/YinniKun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-08-06T04:41:06 | 2025-08-07T22:07:09 | null | NONE | null | null | null | null | ### Feature request
Right now `AutoImageProcessor` detaches the tensor from any gradient, which is fine when the input comes from a DataLoader and has no gradient anyway. But would it be possible to retain the computational graph through the preprocessing in `AutoImageProcessor` (similar to `torchvision.transforms.v2`)?
### Motivation
I discovered this when I was trying to build a model that translates an image before using other models for a downstream task. So something like this:
image -> my translator -> image processor for pre-trained model -> pre-trained model -> downstream
And I discovered that it is impossible to train end-to-end with the image processor in the middle, because it detaches the gradient completely. This feature would be helpful when training something upstream of an existing model, since it would let the gradient flow through during training.
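A minimal illustration of the difference (a simplified stand-in for a processor's normalize step, not the actual `AutoImageProcessor` code): tensor-native operations keep the graph, while a NumPy round-trip, which image preprocessing pipelines may perform internally, severs it:

```python
import torch

img = torch.rand(3, 8, 8, requires_grad=True)

# Tensor-native normalization keeps the computational graph intact.
mean = torch.full((3, 1, 1), 0.5)
std = torch.full((3, 1, 1), 0.5)
normalized = (img - mean) / std
print(normalized.requires_grad)  # True: gradients can flow back to `img`

# A NumPy round-trip (stand-in for PIL/NumPy-based preprocessing) detaches it.
detached = torch.from_numpy((img.detach().numpy() - 0.5) / 0.5)
print(detached.requires_grad)  # False: the graph is gone
```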
### Your contribution
I haven't looked too deeply into the source code of the image processors, but I can draft a PR if there is interest in implementing this. Most transforms in `torchvision.transforms.v2` do not cut off the gradient, so implementing the transformations with `torchvision.transforms.v2` should allow this feature. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39946/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39945 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39945/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39945/comments | https://api.github.com/repos/huggingface/transformers/issues/39945/events | https://github.com/huggingface/transformers/issues/39945 | 3,295,151,100 | I_kwDOCUB6oc7EaAP8 | 39,945 | GPT-OSS mxfp4 with triton_kernel: make_default_matmul_mxfp4_w_layout not found | {
"login": "yilian49",
"id": 43861414,
"node_id": "MDQ6VXNlcjQzODYxNDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/43861414?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yilian49",
"html_url": "https://github.com/yilian49",
"followers_url": "https://api.github.com/users/yilian49/followers",
"following_url": "https://api.github.com/users/yilian49/following{/other_user}",
"gists_url": "https://api.github.com/users/yilian49/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yilian49/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yilian49/subscriptions",
"organizations_url": "https://api.github.com/users/yilian49/orgs",
"repos_url": "https://api.github.com/users/yilian49/repos",
"events_url": "https://api.github.com/users/yilian49/events{/privacy}",
"received_events_url": "https://api.github.com/users/yilian49/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-06T04:36:03 | 2025-09-18T08:02:16 | 2025-09-18T08:02:16 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.55.0
- Platform: Linux-5.15.0-144-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I tried to run the openai gpt-oss-120B model in mxfp4 on H100, following this setup command instruction as given by [this link](https://cookbook.openai.com/articles/gpt-oss/run-transformers#:~:text=pip%20install%20%2DU%20transformers%20accelerate%20torch%20triton%20kernels%0Bpip%20install%20git%2Bhttps%3A//github.com/triton%2Dlang/triton.git%40main%23subdirectory%3Dpython/triton_kernels)
`pip install -U transformers accelerate torch triton kernels`
`pip install git+https://github.com/triton-lang/triton.git@main#subdirectory=python/triton_kernels`
I ran the script [provided here](https://cookbook.openai.com/articles/gpt-oss/run-transformers#:~:text=from%20transformers%20import%20AutoModelForCausalLM,decode(outputs%5B0%5D)))
(And I had to manually upgrade triton to 3.4.0)
The error message states:
`Traceback (most recent call last):
File "/workspace/projects/gpt_oss/generate.py", line 6, in <module>
model = AutoModelForCausalLM.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/projects/trainnew/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 600, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/projects/trainnew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 316, in _wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/workspace/projects/trainnew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 5061, in from_pretrained
) = cls._load_pretrained_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/projects/trainnew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 5524, in _load_pretrained_model
_error_msgs, disk_offload_index, cpu_offload_index = load_shard_file(args)
^^^^^^^^^^^^^^^^^^^^^
File "/workspace/projects/trainnew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 974, in load_shard_file
disk_offload_index, cpu_offload_index = _load_state_dict_into_meta_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/projects/trainnew/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/workspace/projects/trainnew/lib/python3.11/site-packages/transformers/modeling_utils.py", line 882, in _load_state_dict_into_meta_model
hf_quantizer.create_quantized_param(
File "/workspace/projects/trainnew/lib/python3.11/site-packages/transformers/quantizers/quantizer_mxfp4.py", line 223, in create_quantized_param
load_and_swizzle_mxfp4(
File "/workspace/projects/trainnew/lib/python3.11/site-packages/transformers/integrations/mxfp4.py", line 375, in load_and_swizzle_mxfp4
triton_weight_tensor, weight_scale = swizzle_mxfp4(
^^^^^^^^^^^^^^
File "/workspace/projects/trainnew/lib/python3.11/site-packages/transformers/integrations/mxfp4.py", line 64, in swizzle_mxfp4
value_layout, value_layout_opts = layout.make_default_matmul_mxfp4_w_layout(mx_axis=1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'triton_kernels.tensor_details.layout' has no attribute 'make_default_matmul_mxfp4_w_layout'`
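A quick diagnostic (a sketch; the module path is taken from the traceback above) to check whether the installed `triton_kernels` actually exposes the helper before attempting to load the model:

```python
import importlib
import importlib.util

def has_mxfp4_layout_helper() -> bool:
    """Return True if triton_kernels exposes make_default_matmul_mxfp4_w_layout."""
    if importlib.util.find_spec("triton_kernels") is None:
        return False  # package not installed at all
    try:
        layout = importlib.import_module("triton_kernels.tensor_details.layout")
    except ImportError:
        return False  # installed, but the submodule layout has changed
    return hasattr(layout, "make_default_matmul_mxfp4_w_layout")

print(has_mxfp4_layout_helper())
```

If this prints `False`, the installed `triton_kernels` version does not match what transformers' mxfp4 integration expects, which points at a version mismatch rather than a bug in the model itself.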
### Expected behavior
Expect the model to load and run | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39945/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39944 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39944/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39944/comments | https://api.github.com/repos/huggingface/transformers/issues/39944/events | https://github.com/huggingface/transformers/pull/39944 | 3,295,037,556 | PR_kwDOCUB6oc6iTH-v | 39,944 | Add back `_tp_plan` attribute | {
"login": "rishub-tamirisa",
"id": 87284850,
"node_id": "MDQ6VXNlcjg3Mjg0ODUw",
"avatar_url": "https://avatars.githubusercontent.com/u/87284850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rishub-tamirisa",
"html_url": "https://github.com/rishub-tamirisa",
"followers_url": "https://api.github.com/users/rishub-tamirisa/followers",
"following_url": "https://api.github.com/users/rishub-tamirisa/following{/other_user}",
"gists_url": "https://api.github.com/users/rishub-tamirisa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rishub-tamirisa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rishub-tamirisa/subscriptions",
"organizations_url": "https://api.github.com/users/rishub-tamirisa/orgs",
"repos_url": "https://api.github.com/users/rishub-tamirisa/repos",
"events_url": "https://api.github.com/users/rishub-tamirisa/events{/privacy}",
"received_events_url": "https://api.github.com/users/rishub-tamirisa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 2760822153,
"node_id": "MDU6TGFiZWwyNzYwODIyMTUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tensor%20Parallel",
"name": "Tensor Parallel",
"color": "1AD0A8",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-08-06T03:17:24 | 2025-08-20T13:29:56 | 2025-08-20T13:29:56 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39944",
"html_url": "https://github.com/huggingface/transformers/pull/39944",
"diff_url": "https://github.com/huggingface/transformers/pull/39944.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39944.patch",
"merged_at": "2025-08-20T13:29:56"
} | Fixes #39943
- Improvements to tensor parallel plan handling, validation, and extensibility: added property-based getters and setters for `tp_plan` and `pp_plan` in the model class, including validation of parallel styles and layer-pattern matching, with warnings for non-existent patterns. This ensures only supported parallelization styles are used and helps catch misconfigurations early.
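A rough sketch of the property-based pattern described above (class name and style names are illustrative, not the exact `PreTrainedModel` code):

```python
class ModelWithTPPlan:
    # Parallel styles assumed for illustration; the library defines its own set.
    _SUPPORTED_STYLES = {"colwise", "rowwise", "colwise_rep", "rowwise_rep"}

    def __init__(self, base_plan=None):
        self._tp_plan = dict(base_plan or {})

    @property
    def tp_plan(self):
        return self._tp_plan

    @tp_plan.setter
    def tp_plan(self, plan):
        # Validate early so misconfigurations fail at assignment, not at shard time.
        for pattern, style in plan.items():
            if style not in self._SUPPORTED_STYLES:
                raise ValueError(f"Unsupported parallel style {style!r} for {pattern!r}")
        self._tp_plan = dict(plan)

m = ModelWithTPPlan({"layers.*.self_attn.q_proj": "colwise"})
m.tp_plan = {"layers.*.mlp.up_proj": "colwise", "layers.*.mlp.down_proj": "rowwise"}
```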
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39944/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39943 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39943/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39943/comments | https://api.github.com/repos/huggingface/transformers/issues/39943/events | https://github.com/huggingface/transformers/issues/39943 | 3,295,029,310 | I_kwDOCUB6oc7EZig- | 39,943 | Breaking change in unset `_tp_plan` attribute | {
"login": "rishub-tamirisa",
"id": 87284850,
"node_id": "MDQ6VXNlcjg3Mjg0ODUw",
"avatar_url": "https://avatars.githubusercontent.com/u/87284850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rishub-tamirisa",
"html_url": "https://github.com/rishub-tamirisa",
"followers_url": "https://api.github.com/users/rishub-tamirisa/followers",
"following_url": "https://api.github.com/users/rishub-tamirisa/following{/other_user}",
"gists_url": "https://api.github.com/users/rishub-tamirisa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rishub-tamirisa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rishub-tamirisa/subscriptions",
"organizations_url": "https://api.github.com/users/rishub-tamirisa/orgs",
"repos_url": "https://api.github.com/users/rishub-tamirisa/repos",
"events_url": "https://api.github.com/users/rishub-tamirisa/events{/privacy}",
"received_events_url": "https://api.github.com/users/rishub-tamirisa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-06T03:11:25 | 2025-08-20T13:29:57 | 2025-08-20T13:29:57 | CONTRIBUTOR | null | null | null | null | The vLLM transformers frontend [relies on the `_tp_plan` attribute being set in the model](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/transformers.py#L543). It was removed [here](https://github.com/huggingface/transformers/pull/39501/files#diff-6b72b98c4c2dcfc6cc606843917733f5d858374fbc22a735ff483bbc0c1e63eaL2255) in #39501, which breaks vLLM.
vLLM could update to use `model.config.base_model_tp_plan`, or this attribute could be added back.
### Reproduction
```
from vllm import LLM
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", model_impl="transformers", tensor_parallel_size=2)
```
### Expected behavior
The model should load as it normally would pre-#39501. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39943/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39943/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39942 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39942/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39942/comments | https://api.github.com/repos/huggingface/transformers/issues/39942/events | https://github.com/huggingface/transformers/pull/39942 | 3,294,885,425 | PR_kwDOCUB6oc6iSpGr | 39,942 | fix llama issue | {
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-06T01:33:26 | 2025-10-29T22:29:35 | 2025-10-23T21:31:51 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39942",
"html_url": "https://github.com/huggingface/transformers/pull/39942",
"diff_url": "https://github.com/huggingface/transformers/pull/39942.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39942.patch",
"merged_at": null
Similar to PR https://github.com/huggingface/transformers/pull/39646, this fixes the same issue, found while enabling Llama LoRA finetuning across multiple cards.
@SunMarc, please help review, thanks very much. | {
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39942/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39941 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39941/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39941/comments | https://api.github.com/repos/huggingface/transformers/issues/39941/events | https://github.com/huggingface/transformers/pull/39941 | 3,294,751,328 | PR_kwDOCUB6oc6iSNSi | 39,941 | fixing image_utils.py todo | {
"login": "skochar1",
"id": 60591774,
"node_id": "MDQ6VXNlcjYwNTkxNzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/60591774?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skochar1",
"html_url": "https://github.com/skochar1",
"followers_url": "https://api.github.com/users/skochar1/followers",
"following_url": "https://api.github.com/users/skochar1/following{/other_user}",
"gists_url": "https://api.github.com/users/skochar1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skochar1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skochar1/subscriptions",
"organizations_url": "https://api.github.com/users/skochar1/orgs",
"repos_url": "https://api.github.com/users/skochar1/repos",
"events_url": "https://api.github.com/users/skochar1/events{/privacy}",
"received_events_url": "https://api.github.com/users/skochar1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-08-06T00:05:43 | 2025-08-06T10:02:44 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39941",
"html_url": "https://github.com/huggingface/transformers/pull/39941",
"diff_url": "https://github.com/huggingface/transformers/pull/39941.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39941.patch",
"merged_at": null
} | # What does this PR do?
This PR resolves a longstanding TODO in src/transformers/image_utils.py by replacing the use of logger.warning with Python’s built-in warnings.warn for notifying users of unused or invalid parameters in image processor functions. This approach makes warnings more controllable, user-friendly, and consistent with Python best practices.
Specifically, this PR:
* Fixes a TODO comment by swapping logging for proper warning handling using warnings.warn(..., UserWarning, stacklevel=2).
* Adds/updates docstrings and type hints for the affected functions.
* Implements comprehensive tests in tests/utils/test_image_utils.py to cover all usage scenarios—including valid parameters, invalid parameters, and edge cases.
* Ensures warning messages are triggered and formatted as expected.
* Verifies correctness through syntax, imports, and end-to-end integration checks.
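The pattern being described can be sketched as follows (function name and message text are illustrative, not the exact `image_utils.py` code):

```python
import warnings

def validate_kwargs(valid_kwargs: set, passed_kwargs: dict) -> None:
    # Warn, rather than log, so callers can filter or escalate via the warnings module.
    for name in passed_kwargs:
        if name not in valid_kwargs:
            warnings.warn(
                f"Unused or unrecognized kwargs: {name}.", UserWarning, stacklevel=2
            )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    validate_kwargs({"size", "resample"}, {"size": 224, "do_resise": True})
print(len(caught))  # one warning, for the misspelled kwarg
```

Because `warnings.warn` routes through the standard warnings machinery, users can suppress or escalate these messages with `warnings.filterwarnings`, and `stacklevel=2` points the warning at the caller rather than the helper itself.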
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@amyeroberts @qubvel
## Failures
Failures are unrelated to the changes made by this PR. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39941/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39940 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39940/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39940/comments | https://api.github.com/repos/huggingface/transformers/issues/39940/events | https://github.com/huggingface/transformers/pull/39940 | 3,294,737,530 | PR_kwDOCUB6oc6iSKdS | 39,940 | Enable gpt-oss mxfp4 on older hardware (sm75+) | {
"login": "matthewdouglas",
"id": 38992547,
"node_id": "MDQ6VXNlcjM4OTkyNTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/38992547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matthewdouglas",
"html_url": "https://github.com/matthewdouglas",
"followers_url": "https://api.github.com/users/matthewdouglas/followers",
"following_url": "https://api.github.com/users/matthewdouglas/following{/other_user}",
"gists_url": "https://api.github.com/users/matthewdouglas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matthewdouglas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matthewdouglas/subscriptions",
"organizations_url": "https://api.github.com/users/matthewdouglas/orgs",
"repos_url": "https://api.github.com/users/matthewdouglas/repos",
"events_url": "https://api.github.com/users/matthewdouglas/events{/privacy}",
"received_events_url": "https://api.github.com/users/matthewdouglas/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-08-05T23:55:18 | 2025-08-06T17:53:33 | 2025-08-06T13:39:21 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39940",
"html_url": "https://github.com/huggingface/transformers/pull/39940",
"diff_url": "https://github.com/huggingface/transformers/pull/39940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39940.patch",
"merged_at": "2025-08-06T13:39:21"
} | # What does this PR do?
Currently the MXFP4 quantized version of gpt-oss models is restricted to newer GPUs (Hopper and Blackwell). This PR enables the MXFP4 version on Turing, Ampere, and Ada GPUs.
Tested with the gpt-oss-20b on RTX 4090 and T4.
If the user has the kernels installed but the hardware is too old, we'll fall back to dequantizing instead of raising an error.
There's additionally a fix included here for a device mismatch issue that occurred when running with `device_map="auto"` on 2x4090.
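The compute-capability gate described above can be sketched as follows (a simplified illustration; the real check inside the quantizer may differ):

```python
def supports_mxfp4_kernels(major: int, minor: int) -> bool:
    # Hypothetical sketch of the sm75+ gate: Turing (7.5) and newer run
    # the MXFP4 kernels; anything older falls back to dequantization.
    return (major, minor) >= (7, 5)

def choose_path(major: int, minor: int) -> str:
    # Illustrative dispatch mirroring the fallback behavior described above,
    # instead of raising an error on unsupported hardware.
    return "mxfp4" if supports_mxfp4_kernels(major, minor) else "dequantize"
```

In PyTorch the `(major, minor)` pair would come from `torch.cuda.get_device_capability()`.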
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@SunMarc @MekkCyber | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39940/reactions",
"total_count": 10,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 10,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39940/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39939 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39939/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39939/comments | https://api.github.com/repos/huggingface/transformers/issues/39939/events | https://github.com/huggingface/transformers/issues/39939 | 3,294,735,910 | I_kwDOCUB6oc7EYa4m | 39,939 | AttributeError: 'BitsAndBytesConfig' object has no attribute 'get_loading_attributes' with transformers 4.55.0 | {
"login": "yukiharada1228",
"id": 117978472,
"node_id": "U_kgDOBwg1aA",
"avatar_url": "https://avatars.githubusercontent.com/u/117978472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yukiharada1228",
"html_url": "https://github.com/yukiharada1228",
"followers_url": "https://api.github.com/users/yukiharada1228/followers",
"following_url": "https://api.github.com/users/yukiharada1228/following{/other_user}",
"gists_url": "https://api.github.com/users/yukiharada1228/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yukiharada1228/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yukiharada1228/subscriptions",
"organizations_url": "https://api.github.com/users/yukiharada1228/orgs",
"repos_url": "https://api.github.com/users/yukiharada1228/repos",
"events_url": "https://api.github.com/users/yukiharada1228/events{/privacy}",
"received_events_url": "https://api.github.com/users/yukiharada1228/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-05T23:54:22 | 2025-08-08T11:29:11 | 2025-08-08T11:04:59 | NONE | null | null | null | null | ### System Info
transformers version: 4.55.0
platform: Linux-5.15.0-139-generic-x86_64-with-glibc2.31
python version: 3.10.13
PyTorch version: 2.1.0a0+32f93b1
TensorFlow version: N/A
Flax version: N/A
JAX version: N/A
JAXLib version: N/A
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
```
### Expected behavior
Steps to reproduce the behavior:
1. Install transformers 4.55.0 and bitsandbytes 0.46.1
2. Run the above code snippet
3. AttributeError occurs during model loading
Error message:
Traceback (most recent call last):
File "/workspace/script/sft_train.py", line 173, in <module>
main()
File "/workspace/script/sft_train.py", line 101, in main
model = AutoModelForCausalLM.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 600, in from_pretrained
return model_class.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 316, in _wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 4865, in from_pretrained
config.quantization_config = AutoHfQuantizer.merge_quantization_configs(
File "/usr/local/lib/python3.10/dist-packages/transformers/quantizers/auto.py", line 224, in merge_quantization_configs
loading_attr_dict = quantization_config_from_args.get_loading_attributes()
AttributeError: 'BitsAndBytesConfig' object has no attribute 'get_loading_attributes' | {
"login": "yukiharada1228",
"id": 117978472,
"node_id": "U_kgDOBwg1aA",
"avatar_url": "https://avatars.githubusercontent.com/u/117978472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yukiharada1228",
"html_url": "https://github.com/yukiharada1228",
"followers_url": "https://api.github.com/users/yukiharada1228/followers",
"following_url": "https://api.github.com/users/yukiharada1228/following{/other_user}",
"gists_url": "https://api.github.com/users/yukiharada1228/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yukiharada1228/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yukiharada1228/subscriptions",
"organizations_url": "https://api.github.com/users/yukiharada1228/orgs",
"repos_url": "https://api.github.com/users/yukiharada1228/repos",
"events_url": "https://api.github.com/users/yukiharada1228/events{/privacy}",
"received_events_url": "https://api.github.com/users/yukiharada1228/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39939/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39938 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39938/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39938/comments | https://api.github.com/repos/huggingface/transformers/issues/39938/events | https://github.com/huggingface/transformers/pull/39938 | 3,294,632,177 | PR_kwDOCUB6oc6iR0PJ | 39,938 | Fix whisper `return_language` with `return_timestamp=word` | {
"login": "Metric-Void",
"id": 21335640,
"node_id": "MDQ6VXNlcjIxMzM1NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/21335640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Metric-Void",
"html_url": "https://github.com/Metric-Void",
"followers_url": "https://api.github.com/users/Metric-Void/followers",
"following_url": "https://api.github.com/users/Metric-Void/following{/other_user}",
"gists_url": "https://api.github.com/users/Metric-Void/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Metric-Void/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Metric-Void/subscriptions",
"organizations_url": "https://api.github.com/users/Metric-Void/orgs",
"repos_url": "https://api.github.com/users/Metric-Void/repos",
"events_url": "https://api.github.com/users/Metric-Void/events{/privacy}",
"received_events_url": "https://api.github.com/users/Metric-Void/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
},
{
"id": 7377881103,
"node_id": "LA_kwDOCUB6oc8AAAABt8GIDw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Whisper",
"name": "Whisper",
"color": "83303E",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-08-05T22:42:52 | 2025-10-06T17:08:10 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39938",
"html_url": "https://github.com/huggingface/transformers/pull/39938",
"diff_url": "https://github.com/huggingface/transformers/pull/39938.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39938.patch",
"merged_at": null
} | # What does this PR do?
Fixes #39404.
Adds a switch to `Whisper.generate()` that allows preserving some special tokens, which are then stripped in `retrieve_segments` to ensure timestamp alignment.
Tested on short and long audio clips in English, French, and Cantonese. Prediction and timestamp results align, and the language is detected correctly.
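The bookkeeping involved can be illustrated with a small sketch (hypothetical names, not the PR's actual code): special tokens are kept through generation and only dropped afterwards, recording each surviving token's original position so word-level timestamps still line up:

```python
def strip_special_tokens(token_ids, special_ids):
    # Hypothetical sketch: drop special tokens (e.g. language markers) but
    # record the original indices of the kept tokens, so per-token timestamps
    # computed over the full sequence can be re-associated with the words.
    kept, index_map = [], []
    for i, tok in enumerate(token_ids):
        if tok not in special_ids:
            kept.append(tok)
            index_map.append(i)
    return kept, index_map
```

The `index_map` lets a caller look up the timestamp of each remaining token in the original, unstripped sequence.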
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@eustlb @ebezzam
## Local failed tests (WSL2, RUN_SLOW)
```
$ pytest tests/models/whisper
================================================================================================================= short test summary info ==================================================================================================================
FAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelTest::test_flex_attention_with_grads - torch._inductor.exc.InductorError: LoweringException: CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpw7mv95z8/main.c', '-O3', '-shared', '-fPIC', '-Wno-psabi', '-o', '/tmp/tmpw7mv95z8/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', ...
FAILED tests/models/whisper/test_modeling_whisper.py::WhisperModelTest::test_sdpa_can_compile_dynamic - torch._inductor.exc.InductorError: CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmp2jbthzzq/main.c', '-O3', '-shared', '-fPIC', '-Wno-psabi', '-o', '/tmp/tmp2jbthzzq/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/home/metricvoid...
FAILED tests/models/whisper/test_modeling_whisper.py::WhisperEncoderModelTest::test_flex_attention_with_grads - torch._inductor.exc.InductorError: LoweringException: CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpbszajy61/main.c', '-O3', '-shared', '-fPIC', '-Wno-psabi', '-o', '/tmp/tmpbszajy61/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', ...
FAILED tests/models/whisper/test_modeling_whisper.py::WhisperEncoderModelTest::test_sdpa_can_compile_dynamic - torch._inductor.exc.InductorError: CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpevr_eml0/main.c', '-O3', '-shared', '-fPIC', '-Wno-psabi', '-o', '/tmp/tmpevr_eml0/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/home/metricvoid...
FAILED tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_generate_compilation_all_outputs - torch._inductor.exc.InductorError: CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpghb4htrw/main.c', '-O3', '-shared', '-fPIC', '-Wno-psabi', '-o', '/tmp/tmpghb4htrw/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/home/metricvoid...
FAILED tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_generate_compile_model_forward - torch._inductor.exc.InductorError: CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpb3fj6t8c/main.c', '-O3', '-shared', '-fPIC', '-Wno-psabi', '-o', '/tmp/tmpb3fj6t8c/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/home/metricvoid...
FAILED tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_generate_from_inputs_embeds_with_static_cache - torch._inductor.exc.InductorError: CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmp122w6v5o/main.c', '-O3', '-shared', '-fPIC', '-Wno-psabi', '-o', '/tmp/tmp122w6v5o/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/home/metricvoid...
FAILED tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_generate_with_static_cache - torch._inductor.exc.InductorError: CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpee6hyznt/main.c', '-O3', '-shared', '-fPIC', '-Wno-psabi', '-o', '/tmp/tmpee6hyznt/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/home/metricvoid...
FAILED tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_sdpa_can_compile_dynamic - torch._inductor.exc.InductorError: CalledProcessError: Command '['/usr/bin/gcc', '/tmp/tmpbz2lnr80/main.c', '-O3', '-shared', '-fPIC', '-Wno-psabi', '-o', '/tmp/tmpbz2lnr80/cuda_utils.cpython-312-x86_64-linux-gnu.so', '-lcuda', '-L/home/metricvoid...
FAILED tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_padding_side_in_kwargs - ImportError:
FAILED tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_tokenizer_initialization_with_conflicting_key - ImportError:
FAILED tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_tokenizer_mismatch_warning - ImportError:
FAILED tests/models/whisper/test_tokenization_whisper.py::WhisperTokenizerTest::test_truncation_side_in_kwargs - ImportError:
=========================================================================================== 13 failed, 445 passed, 295 skipped, 36 warnings in 166.72s (0:02:46) ===========================================================================================
```
I don't think any of these failures are related to this PR. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39938/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39938/timeline | null | null | null | null | true | false |