url string | repository_url string | labels_url string | comments_url string | events_url string | html_url string | id int64 | node_id string | number int64 | title string | user dict | labels list | state string | locked bool | assignee dict | assignees list | milestone null | comments list | created_at timestamp[ms] | updated_at timestamp[ms] | closed_at timestamp[ms] | author_association string | type dict | active_lock_reason null | draft bool | pull_request dict | body string | closed_by dict | reactions dict | timeline_url string | performed_via_github_app null | state_reason string | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool | is_closed bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/40840 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40840/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40840/comments | https://api.github.com/repos/huggingface/transformers/issues/40840/events | https://github.com/huggingface/transformers/pull/40840 | 3,409,367,966 | PR_kwDOCUB6oc6oL8Eg | 40,840 | feat: add qwen2 pruning support | {
"login": "wangwenmingaa",
"id": 30922691,
"node_id": "MDQ6VXNlcjMwOTIyNjkx",
"avatar_url": "https://avatars.githubusercontent.com/u/30922691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangwenmingaa",
"html_url": "https://github.com/wangwenmingaa",
"followers_url": "https://api.github.com/users/wangwenmingaa/followers",
"following_url": "https://api.github.com/users/wangwenmingaa/following{/other_user}",
"gists_url": "https://api.github.com/users/wangwenmingaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangwenmingaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangwenmingaa/subscriptions",
"organizations_url": "https://api.github.com/users/wangwenmingaa/orgs",
"repos_url": "https://api.github.com/users/wangwenmingaa/repos",
"events_url": "https://api.github.com/users/wangwenmingaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangwenmingaa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-09-12T08:06:34 | 2025-09-12T13:08:49 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40840",
"html_url": "https://github.com/huggingface/transformers/pull/40840",
"diff_url": "https://github.com/huggingface/transformers/pull/40840.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40840.patch",
"merged_at": null
} | # What does this PR do?
This PR enables models that have undergone structured pruning to be loaded correctly. After structured pruning, the model configuration file is modified: the `layer_head_num` and `layer_inter_size` parameters may vary across layers of the network, so when the Transformers library loads the configuration it must map these parameters layer by layer. This PR adjusts the Qwen2 configuration-loading logic in Transformers accordingly, ensuring compatibility with structurally pruned models.
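The layer-by-layer mapping can be sketched as follows. This is a hypothetical illustration only: the class name, defaults, and fallback logic are assumptions, not the actual `Qwen2Config` change; only the field names `layer_head_num` and `layer_inter_size` come from the description above.

```python
# Hypothetical sketch of per-layer pruned-config mapping; illustrative only.
class PrunedQwen2Config:
    def __init__(self, num_hidden_layers=4, num_attention_heads=16,
                 intermediate_size=8960, layer_head_num=None, layer_inter_size=None):
        # Fall back to uniform per-layer values when the checkpoint was not
        # pruned, so unpruned models keep loading exactly as before.
        self.layer_head_num = layer_head_num or [num_attention_heads] * num_hidden_layers
        self.layer_inter_size = layer_inter_size or [intermediate_size] * num_hidden_layers

    def heads_for_layer(self, layer_idx):
        # Each layer looks up its own (possibly pruned) head count.
        return self.layer_head_num[layer_idx]

# A pruned checkpoint may keep fewer heads in deeper layers:
cfg = PrunedQwen2Config(layer_head_num=[16, 12, 12, 8])
```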
## Before submitting
1. Verify that the model after structured pruning can be loaded correctly by the Transformers library.
2. Confirm that unpruned models can still be loaded normally by the Transformers library. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40840/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40839 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40839/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40839/comments | https://api.github.com/repos/huggingface/transformers/issues/40839/events | https://github.com/huggingface/transformers/pull/40839 | 3,409,296,262 | PR_kwDOCUB6oc6oLsqp | 40,839 | Simplify unnecessary Optional typing | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-12T07:42:31 | 2025-09-22T13:02:49 | 2025-09-22T12:57:51 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40839",
"html_url": "https://github.com/huggingface/transformers/pull/40839",
"diff_url": "https://github.com/huggingface/transformers/pull/40839.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40839.patch",
"merged_at": "2025-09-22T12:57:51"
} | # What does this PR do?
Remove `Optional` from some type annotations where the default value is not `None` and the variable is never checked with `is None` / `is not None`. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40839/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40838 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40838/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40838/comments | https://api.github.com/repos/huggingface/transformers/issues/40838/events | https://github.com/huggingface/transformers/pull/40838 | 3,409,161,402 | PR_kwDOCUB6oc6oLPOV | 40,838 | Clarify passing is_causal in sdpa_attention_paged_forward | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-12T07:02:25 | 2025-09-16T16:24:54 | 2025-09-15T11:51:22 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40838",
"html_url": "https://github.com/huggingface/transformers/pull/40838",
"diff_url": "https://github.com/huggingface/transformers/pull/40838.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40838.patch",
"merged_at": "2025-09-15T11:51:22"
} | # What does this PR do?
`is_causal` is not forwarded in `sdpa_attention_paged_forward`. If that is intentional, a comment should be added. | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40838/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40837 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40837/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40837/comments | https://api.github.com/repos/huggingface/transformers/issues/40837/events | https://github.com/huggingface/transformers/pull/40837 | 3,408,666,472 | PR_kwDOCUB6oc6oJlWh | 40,837 | Add Top-H decoding (entropy-bounded truncation) as a LogitsWarper for text generation | {
"login": "ErfanBaghaei",
"id": 219734999,
"node_id": "U_kgDODRjj1w",
"avatar_url": "https://avatars.githubusercontent.com/u/219734999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErfanBaghaei",
"html_url": "https://github.com/ErfanBaghaei",
"followers_url": "https://api.github.com/users/ErfanBaghaei/followers",
"following_url": "https://api.github.com/users/ErfanBaghaei/following{/other_user}",
"gists_url": "https://api.github.com/users/ErfanBaghaei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErfanBaghaei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErfanBaghaei/subscriptions",
"organizations_url": "https://api.github.com/users/ErfanBaghaei/orgs",
"repos_url": "https://api.github.com/users/ErfanBaghaei/repos",
"events_url": "https://api.github.com/users/ErfanBaghaei/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErfanBaghaei/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-12T03:06:27 | 2025-10-08T13:38:27 | 2025-10-08T13:37:52 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40837",
"html_url": "https://github.com/huggingface/transformers/pull/40837",
"diff_url": "https://github.com/huggingface/transformers/pull/40837.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40837.patch",
"merged_at": "2025-10-08T13:37:52"
} |
### PR Description
**Add Top-H decoding (entropy-bounded truncation) as a LogitsWarper**
---
#### Summary
This PR adds **Top-H**, a new sampling method for text generation, as a `LogitsWarper` in 🤗 Transformers.
Top-H truncates the next-token distribution by greedily including the most probable tokens until the entropy of the renormalized subset `q` is bounded:
$$
H(q) \leq \alpha \cdot H(p)
$$
where `H(p)` is the entropy of the model’s original distribution and `α ∈ (0,1]` is the user-specified scale.
This provides an **adaptive trade-off between creativity and coherence**: when the model is confident, the subset remains tight; when the model is uncertain, the subset widens—improving diversity without collapsing into noise.
---
#### API
* **New generation config fields**:
* `top_h: Optional[float] = None`
* **New class**:
* `TopHLogitsWarper`
Usage with `GenerationConfig`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")
gen_cfg = GenerationConfig(do_sample=True, temperature=1.3, top_h=0.4)
out = model.generate(**tok("Once upon a time,", return_tensors="pt"), generation_config=gen_cfg, max_new_tokens=50)
print(tok.decode(out[0], skip_special_tokens=True))
```
---
#### Motivation & Results
* **Theoretical foundation**: Top-H is derived from *Entropy-Constrained Maximum Distortion (ECMD)* and its dual *Entropy-Constrained Maximum Mass (ECMM)*. The exact solution is NP-hard; Top-H is a greedy approximation with early termination.
* **Empirical results** (see [paper](https://arxiv.org/abs/2509.02510)):
* On GSM8K at temperature `T=2`, Top-H improves accuracy by up to **25.63% over min-p** while preserving fluency.
* On creative writing tasks, Top-H improves diversity without incoherence.
* Runtime overhead is negligible (**\~0.8%** in benchmarks).
* **Design criteria**: Fully contained in a logits warper, no long-term maintenance burden. ✔️
---
#### Implementation Notes
* Vectorized and numerically stable:
* Uses cumulative sums of `p` and `p log p` for efficiency (O(V log V), same as top-p).
* Compatible with all other processors/warpers.
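
As a rough illustration of the mechanism described above, here is a minimal sketch of entropy-bounded truncation. This is not the actual `TopHLogitsWarper` code from the PR; the function name, the `1e-12` clamp, and other numerical details are assumptions.

```python
import torch

def top_h_filter(logits: torch.Tensor, alpha: float = 0.4,
                 filter_value: float = float("-inf")) -> torch.Tensor:
    """Hedged sketch of the greedy Top-H rule: keep the most probable tokens
    until the entropy of the renormalized prefix exceeds alpha * H(p)."""
    probs = torch.softmax(logits, dim=-1)
    sorted_p, sorted_idx = torch.sort(probs, descending=True, dim=-1)
    neg_plogp = -sorted_p * torch.log(sorted_p.clamp_min(1e-12))
    cum_mass = sorted_p.cumsum(dim=-1)        # prefix mass M_k
    cum_neg_plogp = neg_plogp.cumsum(dim=-1)  # prefix sum of -p log p
    # Entropy of the renormalized prefix q: H(q) = (1/M) * sum(-p log p) + log M
    prefix_entropy = cum_neg_plogp / cum_mass + torch.log(cum_mass)
    full_entropy = neg_plogp.sum(dim=-1, keepdim=True)  # H(p)
    keep = prefix_entropy <= alpha * full_entropy
    keep[..., 0] = True  # always keep the top-1 token
    # Greedy early termination: stop at the first index violating the bound.
    keep = keep.to(torch.long).cumprod(dim=-1).bool()
    mask = torch.zeros_like(keep)
    mask.scatter_(-1, sorted_idx, keep)
    return logits.masked_fill(~mask, filter_value)
```

On a highly peaked distribution a small `alpha` keeps only the top-1 token, while a flatter distribution retains more candidates, matching the adaptive behavior described above.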
---
#### Tests
Added in `tests/generation/test_logits_process.py`:
* In highly peaked distributions, small alpha keeps only the top-1 token.
* Handles zero probability tokens.
* No-op when `α=1.0`.
---
#### References
* *Top-H Decoding: Adapting the Creativity and Coherence with Bounded Entropy in Text Generation*
ArXiv: [2509.02510](https://arxiv.org/abs/2509.02510)
* Original implementation: [github.com/ErfanBaghaei/Top-H-Decoding](https://github.com/ErfanBaghaei/Top-H-Decoding)
---
| {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40837/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40836 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40836/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40836/comments | https://api.github.com/repos/huggingface/transformers/issues/40836/events | https://github.com/huggingface/transformers/pull/40836 | 3,408,232,752 | PR_kwDOCUB6oc6oIH8M | 40,836 | Preetham | {
"login": "preethamyerramsetty",
"id": 135053952,
"node_id": "U_kgDOCAzCgA",
"avatar_url": "https://avatars.githubusercontent.com/u/135053952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/preethamyerramsetty",
"html_url": "https://github.com/preethamyerramsetty",
"followers_url": "https://api.github.com/users/preethamyerramsetty/followers",
"following_url": "https://api.github.com/users/preethamyerramsetty/following{/other_user}",
"gists_url": "https://api.github.com/users/preethamyerramsetty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/preethamyerramsetty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/preethamyerramsetty/subscriptions",
"organizations_url": "https://api.github.com/users/preethamyerramsetty/orgs",
"repos_url": "https://api.github.com/users/preethamyerramsetty/repos",
"events_url": "https://api.github.com/users/preethamyerramsetty/events{/privacy}",
"received_events_url": "https://api.github.com/users/preethamyerramsetty/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T22:45:31 | 2025-09-12T07:17:39 | 2025-09-12T07:17:38 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40836",
"html_url": "https://github.com/huggingface/transformers/pull/40836",
"diff_url": "https://github.com/huggingface/transformers/pull/40836.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40836.patch",
"merged_at": null
} | # What does this PR do?
Adding my name to contributor list
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40836/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40835 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40835/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40835/comments | https://api.github.com/repos/huggingface/transformers/issues/40835/events | https://github.com/huggingface/transformers/issues/40835 | 3,408,041,994 | I_kwDOCUB6oc7LIpgK | 40,835 | `generate_batch` failing | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | [] | 2025-09-11T21:09:05 | 2025-10-15T19:23:06 | 2025-10-15T19:23:06 | MEMBER | null | null | null | null | ### System Info
- `transformers` version: 4.57.0.dev0 (cf084f5b40e19b5a5f946cee75bead6d4247b071)
- Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31
- Python version: 3.12.11
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: 0.17.4
- PyTorch version (accelerator?): 2.7.1+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@remi-or @McPatate
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM, GenerationConfig
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct", device_map="cuda")
input_ids = [
[29038, 787, 4103],
[29038, 787, 4103],
]
model.generate_batch(input_ids, generation_config=GenerationConfig())
```
```
`eos_token_id` not set in GenerationConfig. Setting to -1 (disabled).
num_blocks = 42646 is too large, setting to self._upper_bound_num_blocks = 4096
max_batch_tokens = 13646 is too large, setting to self._upper_bound_max_batch_tokens = 256
PagedAttentionCache initialized with self.num_blocks = 4096, self.block_size = 32, page_size = 128, self.max_batch_tokens = 256 num_attention_masks = 1
self.cache_shape = (131073, 2, 64) self.key_cache[0].shape = torch.Size([131073, 2, 64]) self.key_cache[0].numel() = 16777344
Error in generation loop: CUDA error: operation failed due to a previous error during capture
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Traceback (most recent call last):
File "/fsx/qgallouedec/transformers/src/transformers/generation/continuous_batching/continuous_api.py", line 686, in warmup
self._generation_step(batch_processor)
File "/fsx/qgallouedec/transformers/src/transformers/generation/continuous_batching/continuous_api.py", line 692, in _generation_step
batch_data = batch_processor.get_model_kwargs()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/qgallouedec/transformers/src/transformers/generation/continuous_batching/continuous_api.py", line 239, in get_model_kwargs
kwargs["attention_mask"][layer_type] = self.attention_mask[:1, :, :t, : seqlens_k[-1]]
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: operation not permitted when stream is capturing
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/fsx/qgallouedec/transformers/src/transformers/generation/continuous_batching/continuous_api.py", line 781, in _run_generation_loop
self._inner_generation_loop(batch_processor)
File "/fsx/qgallouedec/transformers/src/transformers/generation/continuous_batching/continuous_api.py", line 801, in _inner_generation_loop
self.warmup(batch_processor)
File "/fsx/qgallouedec/transformers/src/transformers/generation/continuous_batching/continuous_api.py", line 685, in warmup
with torch.cuda.graph(self.graph, stream=stream):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/torch/cuda/graphs.py", line 186, in __exit__
self.cuda_graph.capture_end()
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/torch/cuda/graphs.py", line 84, in capture_end
super().capture_end()
RuntimeError: CUDA error: operation failed due to a previous error during capture
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Generation loop finished.
Generation thread terminated unexpectedly.
Solving 2 requests: 0%| | 0/2 [00:01<?, ?request/s]
```
The bug was introduced in #40688
I was able to track back the issue to this specific commit https://github.com/remi-or/transformers/commit/0fe77e8617f99d5c2d653ae8180223a36d99aead
### Expected behavior
To work | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40835/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40834 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40834/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40834/comments | https://api.github.com/repos/huggingface/transformers/issues/40834/events | https://github.com/huggingface/transformers/pull/40834 | 3,407,861,747 | PR_kwDOCUB6oc6oG4DZ | 40,834 | Consistent naming for images kwargs | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T19:54:42 | 2025-09-17T17:49:31 | 2025-09-17T16:40:25 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40834",
"html_url": "https://github.com/huggingface/transformers/pull/40834",
"diff_url": "https://github.com/huggingface/transformers/pull/40834.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40834.patch",
"merged_at": "2025-09-17T16:40:25"
} | # What does this PR do?
it lays ground for my next PR, where I am aiming to unify `DefaultImagesKwargsFastInit`and `ImagesKwargs` since they are almost identical except for three kwargs (size_divisor, do_pad and pad_size)
This PR consolidates the naming of the above three kwargs. Specifically:
- `size_divisor` is removed from general kwargs as it's used only in ~5 models
- `pad_size` and `do_pad` are added in default set of fast image processor kwargs along with a default `self.pad` method
- A few deprecations added where the naming was not consistent, e.g. `size_divisibility` instead of `size_divisor`
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40834/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40833 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40833/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40833/comments | https://api.github.com/repos/huggingface/transformers/issues/40833/events | https://github.com/huggingface/transformers/issues/40833 | 3,407,819,340 | I_kwDOCUB6oc7LHzJM | 40,833 | Generate attends to padded tokens when using left-padded (KV) batches | {
"login": "giulio98",
"id": 79860892,
"node_id": "MDQ6VXNlcjc5ODYwODky",
"avatar_url": "https://avatars.githubusercontent.com/u/79860892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/giulio98",
"html_url": "https://github.com/giulio98",
"followers_url": "https://api.github.com/users/giulio98/followers",
"following_url": "https://api.github.com/users/giulio98/following{/other_user}",
"gists_url": "https://api.github.com/users/giulio98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/giulio98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giulio98/subscriptions",
"organizations_url": "https://api.github.com/users/giulio98/orgs",
"repos_url": "https://api.github.com/users/giulio98/repos",
"events_url": "https://api.github.com/users/giulio98/events{/privacy}",
"received_events_url": "https://api.github.com/users/giulio98/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-09-11T19:37:32 | 2025-09-16T13:00:51 | 2025-09-15T15:02:33 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.56.1
- Platform: Linux-6.6.83-amd64-x86_64-with-glibc2.35
- Python version: 3.9.23
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: 0,1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@gante @zucchini-nlp
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Minimal Reproducible Example
```python
import copy
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B", attn_implementation="eager", dtype=torch.float32).cuda().eval()
sentences = ["short sentence", "a much much longer sentence that forces left padding."]
question = "\n\nQ: Is the sentence short or long? A:"
# (A) vanilla generation with batch_size = 1
tokenized_sentence = tokenizer(sentences[0], return_tensors="pt").to(model.device) # tokenize just the first sentence
tokenized_question = tokenizer(question, return_tensors="pt", add_special_tokens=False).to(model.device) # tokenize the question
new_input_ids = torch.cat([tokenized_sentence["input_ids"], tokenized_question["input_ids"]], dim=1)
new_attention_mask = torch.cat([tokenized_sentence["attention_mask"], tokenized_question["attention_mask"]], dim=1)
new_inputs = {"input_ids": new_input_ids, "attention_mask": new_attention_mask}
with torch.inference_mode():
outputs = model.generate(**new_inputs, max_new_tokens=8, top_p=1.0, do_sample=False, pad_token_id=tokenizer.pad_token_id)
generated_ids = outputs[:, new_inputs["input_ids"].shape[1]:, ...]
generated_A = generated_ids[0]
print("------------------------------")
print("(A) Vanilla generation with batch_size = 1")
print("Batch_item 0:", tokenizer.decode(generated_A)) # correct
print("------------------------------")
# (B) vanilla generation with batch_size > 1
tokenized_sentence = tokenizer(sentences, padding=True, return_tensors="pt").to(model.device) # tokenize all sentences
B = tokenized_sentence["input_ids"].shape[0]
tokenized_question = tokenizer([question] * B, return_tensors="pt", add_special_tokens=False).to(model.device) # tokenize the question replicate per batch
new_input_ids = torch.cat([tokenized_sentence["input_ids"], tokenized_question["input_ids"]], dim=1)
new_attention_mask = torch.cat([tokenized_sentence["attention_mask"], tokenized_question["attention_mask"]], dim=1)
new_inputs = {"input_ids": new_input_ids, "attention_mask": new_attention_mask}
with torch.inference_mode():
outputs = model.generate(**new_inputs, max_new_tokens=8, top_p=1.0, do_sample=False, pad_token_id=tokenizer.pad_token_id)
generated_ids = outputs[:, new_inputs["input_ids"].shape[1]:, ...]
print("------------------------------")
print("(B) Vanilla generation with batch_size > 1")
generated_B_0 = generated_ids[0]
print("Batch_item 0:", tokenizer.decode(generated_B_0)) # correct
generated_B_1 = generated_ids[1]
print("Batch_item 1:", tokenizer.decode(generated_B_1)) # correct
print("------------------------------")
# (C) prefill + continue generation with batch_size = 1
tokenized_sentence = tokenizer(sentences[0], return_tensors="pt").to(model.device) # tokenize just the first sentence
with torch.inference_mode():
sentence_cache = model(**tokenized_sentence, use_cache=True).past_key_values
tokenized_question = tokenizer(question, return_tensors="pt", add_special_tokens=False).to(model.device) # tokenize the question
new_input_ids = torch.cat([tokenized_sentence["input_ids"], tokenized_question["input_ids"]], dim=1)
new_attention_mask = torch.cat([tokenized_sentence["attention_mask"], tokenized_question["attention_mask"]], dim=1)
new_inputs = {"input_ids": new_input_ids, "attention_mask": new_attention_mask}
past_key_values = copy.deepcopy(sentence_cache)
with torch.inference_mode():
outputs = model.generate(**new_inputs, past_key_values=past_key_values, max_new_tokens=8, top_p=1.0, do_sample=False, pad_token_id=tokenizer.pad_token_id)
generated_ids = outputs[:, new_inputs["input_ids"].shape[1]:, ...]
generated_C = generated_ids[0]
print("------------------------------")
print("(C) Prefill + continue generation with batch_size = 1")
print("Batch_item 0:", tokenizer.decode(generated_C)) # correct
print("------------------------------")
# (D) prefill + continue generation with batch_size > 1
tokenized_sentence = tokenizer(sentences, padding=True, return_tensors="pt").to(model.device) # tokenize all sentences
with torch.inference_mode():
sentence_cache = model(**tokenized_sentence, use_cache=True).past_key_values
B = tokenized_sentence["input_ids"].shape[0]
tokenized_question = tokenizer([question] * B, return_tensors="pt", add_special_tokens=False).to(model.device) # tokenize the question replicate per batch
new_input_ids = torch.cat([tokenized_sentence["input_ids"], tokenized_question["input_ids"]], dim=1)
new_attention_mask = torch.cat([tokenized_sentence["attention_mask"], tokenized_question["attention_mask"]], dim=1)
new_inputs = {"input_ids": new_input_ids, "attention_mask": new_attention_mask}
past_key_values = copy.deepcopy(sentence_cache)
with torch.inference_mode():
outputs = model.generate(**new_inputs, past_key_values=past_key_values, max_new_tokens=8, top_p=1.0, do_sample=False, pad_token_id=tokenizer.pad_token_id)
generated_ids = outputs[:, new_inputs["input_ids"].shape[1]:, ...]
print("------------------------------")
print("(D) Prefill + continue generation with batch_size > 1")
generated_D_0 = generated_ids[0]
print("Batch_item 0:", tokenizer.decode(generated_D_0)) # <-- wrong: it has padded tokens and aren't handled correctly in generate
generated_D_1 = generated_ids[1]
print("Batch_item 1:", tokenizer.decode(generated_D_1)) # correct because it doesn't have padded tokens
print("------------------------------")
```
### Expected behavior
All four settings (A), (B), (C), (D) should be identical.
However, for setting (D), this holds **only** when the batch contains no padded tokens. If padding is present, the generation diverges because the model ends up attending to the padded tokens. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40833/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/40833/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40832 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40832/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40832/comments | https://api.github.com/repos/huggingface/transformers/issues/40832/events | https://github.com/huggingface/transformers/pull/40832 | 3,407,809,931 | PR_kwDOCUB6oc6oGsxY | 40,832 | [Llama4] Remove `image_sizes` arg and deprecate `vision_feature_layer` | {
"login": "yaswanth19",
"id": 82788246,
"node_id": "MDQ6VXNlcjgyNzg4MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82788246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaswanth19",
"html_url": "https://github.com/yaswanth19",
"followers_url": "https://api.github.com/users/yaswanth19/followers",
"following_url": "https://api.github.com/users/yaswanth19/following{/other_user}",
"gists_url": "https://api.github.com/users/yaswanth19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaswanth19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaswanth19/subscriptions",
"organizations_url": "https://api.github.com/users/yaswanth19/orgs",
"repos_url": "https://api.github.com/users/yaswanth19/repos",
"events_url": "https://api.github.com/users/yaswanth19/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaswanth19/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T19:33:24 | 2025-09-17T09:14:50 | 2025-09-17T09:14:13 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40832",
"html_url": "https://github.com/huggingface/transformers/pull/40832",
"diff_url": "https://github.com/huggingface/transformers/pull/40832.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40832.patch",
"merged_at": "2025-09-17T09:14:13"
} | As per the title, simply removes the `image_sizes` arg from Llama4's forward method as it's unused, and deprecates the `vision_feature_layer` arg.
@ArthurZucker for review | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40832/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40831 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40831/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40831/comments | https://api.github.com/repos/huggingface/transformers/issues/40831/events | https://github.com/huggingface/transformers/pull/40831 | 3,407,732,350 | PR_kwDOCUB6oc6oGb-t | 40,831 | Lower logging level CB | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T19:02:52 | 2025-09-11T21:42:18 | 2025-09-11T21:42:18 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40831",
"html_url": "https://github.com/huggingface/transformers/pull/40831",
"diff_url": "https://github.com/huggingface/transformers/pull/40831.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40831.patch",
"merged_at": null
} | I feel like all these messages are useful for debugging purposes but for actual usage, it's probably not relevant. | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40831/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40830 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40830/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40830/comments | https://api.github.com/repos/huggingface/transformers/issues/40830/events | https://github.com/huggingface/transformers/issues/40830 | 3,407,645,485 | I_kwDOCUB6oc7LHIst | 40,830 | `pipeline` support for backends like `vLLM` and `SGLang` | {
"login": "MRiabov",
"id": 108194191,
"node_id": "U_kgDOBnLpjw",
"avatar_url": "https://avatars.githubusercontent.com/u/108194191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MRiabov",
"html_url": "https://github.com/MRiabov",
"followers_url": "https://api.github.com/users/MRiabov/followers",
"following_url": "https://api.github.com/users/MRiabov/following{/other_user}",
"gists_url": "https://api.github.com/users/MRiabov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MRiabov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MRiabov/subscriptions",
"organizations_url": "https://api.github.com/users/MRiabov/orgs",
"repos_url": "https://api.github.com/users/MRiabov/repos",
"events_url": "https://api.github.com/users/MRiabov/events{/privacy}",
"received_events_url": "https://api.github.com/users/MRiabov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-09-11T18:30:13 | 2025-09-13T09:18:11 | null | NONE | null | null | null | null | ### Feature request
Hello,
I'm trying to run an offline inference over a dataset with 170k rows, each 100-500 tokens. That's sizeable, at least for me.
`transformers` seems to offer no support for batched inference. Currently, when a `pipeline` is iterated over a `dataset`, it runs a for loop, computing one query after another, and that's slow.
### Motivation
Speed. I want to work on large datasets. My current dataset will grow tenfold. And I have more to process. But I can't set up a proper offline inference for it!
### Your contribution
I'd say my current problem can be resolved by an `async` `for` loop sending requests to a bare `vLLM` server. But this feels suboptimal? Why can't I do this python-to-python? | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40830/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40829 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40829/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40829/comments | https://api.github.com/repos/huggingface/transformers/issues/40829/events | https://github.com/huggingface/transformers/issues/40829 | 3,407,572,481 | I_kwDOCUB6oc7LG24B | 40,829 | Cannot load config for Qwen/Qwen2.5-32B-Instruct | {
"login": "qiaoruiyt",
"id": 15666910,
"node_id": "MDQ6VXNlcjE1NjY2OTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/15666910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qiaoruiyt",
"html_url": "https://github.com/qiaoruiyt",
"followers_url": "https://api.github.com/users/qiaoruiyt/followers",
"following_url": "https://api.github.com/users/qiaoruiyt/following{/other_user}",
"gists_url": "https://api.github.com/users/qiaoruiyt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qiaoruiyt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qiaoruiyt/subscriptions",
"organizations_url": "https://api.github.com/users/qiaoruiyt/orgs",
"repos_url": "https://api.github.com/users/qiaoruiyt/repos",
"events_url": "https://api.github.com/users/qiaoruiyt/events{/privacy}",
"received_events_url": "https://api.github.com/users/qiaoruiyt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-09-11T18:04:22 | 2025-09-11T18:20:29 | 2025-09-11T18:20:29 | NONE | null | null | null | null | ### System Info
I suddenly cannot run `Qwen/Qwen2.5-32B-Instruct` model locally! It happened all of a sudden.
Debugging gives:
```
(Pdb) PretrainedConfig.get_config_dict('Qwen/Qwen2.5-32B-Instruct')
({}, {})
```
AutoConfig.from_pretrained gives:
```
ValueError: Unrecognized model in Qwen/Qwen2.5-32B-Instruct. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: aimv2, aimv2_vision_model, albert, align, altclip, apertus, arcee, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, bitnet, blenderbot, blenderbot-small, blip, blip-2, blip_2_qformer, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, cohere2_vision, colpali, colqwen2, conditional_detr, convbert, convnext, convnextv2, cpmant, csm, ctrl, cvt, d_fine, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v2, deepseek_v3, deepseek_vl, deepseek_vl_hybrid, deformable_detr, deit, depth_anything, depth_pro, deta, detr, dia, diffllama, dinat, dinov2, dinov2_with_registers, dinov3_convnext, dinov3_vit, distilbert, doge, donut-swin, dots1, dpr, dpt, efficientformer, efficientloftr, efficientnet, electra, emu3, encodec, encoder-decoder, eomt, ernie, ernie4_5, ernie4_5_moe, ernie_m, esm, evolla, exaone4, falcon, falcon_h1, falcon_mamba, fastspeech2_conformer, fastspeech2_conformer_with_hifigan, flaubert, flava, florence2, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, gemma3n, gemma3n_audio, gemma3n_text, gemma3n_vision, git, glm, glm4, glm4_moe, glm4v, glm4v_moe, glm4v_moe_text, glm4v_text, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gpt_oss, gptj, gptsan-japanese, granite, granite_speech, granitemoe, granitemoehybrid, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hgnet_v2, hiera, hubert, hunyuan_v1_dense, hunyuan_v1_moe, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, 
instructblip, instructblipvideo, internvl, internvl_vision, jamba, janus, jetmoe, jukebox, kosmos-2, kosmos-2.5, kyutai_speech_to_text, layoutlm, layoutlmv2, layoutlmv3, led, levit, lfm2, lightglue, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, metaclip_2, mgp-str, mimi, minimax, mistral, mistral3, mixtral, mlcd, mllama, mm-grounding-dino, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, modernbert-decoder, moonshine, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, ovis2, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, perception_encoder, perception_lm, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_omni, qwen2_5_vl, qwen2_5_vl_text, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen2_vl_text, qwen3, qwen3_moe, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam2, sam2_hiera_det_model, sam2_video, sam2_vision_model, sam_hq, sam_hq_vision_model, sam_vision_model, seamless_m4t, seamless_m4t_v2, seed_oss, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip_vision_model, smollm3, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, t5gemma, table-transformer, tapas, textnet, time_series_transformer, timesfm, timesformer, timm_backbone, 
timm_wrapper, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vitpose, vitpose_backbone, vits, vivit, vjepa2, voxtral, voxtral_encoder, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xcodec, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xlstm, xmod, yolos, yoso, zamba, zamba2, zoedepth
```
However, I am able to load it by supplying the actual local path in '~/.cache/huggingface/hub'. I am also able to load other versions of Qwen2.5.
Changing the transformers version and reinstalling both didn't help :(.
What happened!?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
PretrainedConfig.get_config_dict('Qwen/Qwen2.5-32B-Instruct')
### Expected behavior
Returns an empty dictionary. | {
"login": "qiaoruiyt",
"id": 15666910,
"node_id": "MDQ6VXNlcjE1NjY2OTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/15666910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qiaoruiyt",
"html_url": "https://github.com/qiaoruiyt",
"followers_url": "https://api.github.com/users/qiaoruiyt/followers",
"following_url": "https://api.github.com/users/qiaoruiyt/following{/other_user}",
"gists_url": "https://api.github.com/users/qiaoruiyt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qiaoruiyt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qiaoruiyt/subscriptions",
"organizations_url": "https://api.github.com/users/qiaoruiyt/orgs",
"repos_url": "https://api.github.com/users/qiaoruiyt/repos",
"events_url": "https://api.github.com/users/qiaoruiyt/events{/privacy}",
"received_events_url": "https://api.github.com/users/qiaoruiyt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40829/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40828 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40828/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40828/comments | https://api.github.com/repos/huggingface/transformers/issues/40828/events | https://github.com/huggingface/transformers/pull/40828 | 3,407,335,223 | PR_kwDOCUB6oc6oFFaT | 40,828 | Fixes for continuous batching | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T16:46:55 | 2025-09-12T14:45:57 | 2025-09-12T13:35:32 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40828",
"html_url": "https://github.com/huggingface/transformers/pull/40828",
"diff_url": "https://github.com/huggingface/transformers/pull/40828.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40828.patch",
"merged_at": "2025-09-12T13:35:32"
} | Some architectures like `llama` alter the attention mask if it is not a tensor, which was not compatible with the way CB created and handled the attention mask. Now, arguments like `attention_mask`, `cumulative_seqlens_k` and `max_seqlen_k` are tensors or ints unless the model is hybrid, in which case they are dictionaries. This is the main fix, but the PR also:
- cleans up how those arguments are built, reset, and delivered to the model to make things tidier
- adds support for attention sink in `eager_paged`
- explicitly disables (for now) cuda graphs and sampling in CB
- adds tests for CB that pass on MI325 and H100 (though they're a little sketchy for some models)
- fixes some telemetry metrics that were broken when hybrid allocations were added | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40828/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40828/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40827 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40827/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40827/comments | https://api.github.com/repos/huggingface/transformers/issues/40827/events | https://github.com/huggingface/transformers/issues/40827 | 3,407,287,871 | I_kwDOCUB6oc7LFxY_ | 40,827 | [`Cache`] Proper support for Linear Attention (related) caches | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 9105758243,
"node_id": "LA_kwDOCUB6oc8AAAACHr7YIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for_v5?",
"name": "for_v5?",
"color": "35BC94",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-09-11T16:31:11 | 2025-10-29T10:02:11 | null | CONTRIBUTOR | {
"id": 5746498,
"node_id": "IT_kwDOAYh3p84AV69C",
"name": "Feature",
"description": "A request, idea, or new functionality",
"color": "blue",
"created_at": "2024-01-30T10:06:08",
"updated_at": "2024-10-08T12:18:58",
"is_enabled": true
} | null | null | null | Linear Attention covers many flavors that only came up in recent years. The most famous one is Mamba(2) - which was initially often seen/described as (selective) SSM. However, Qwen3 Next includes `Gated Delta Net`, another different flavor.
I expect more and more to follow suite and our current way of handling this, is ignoring everything and rewrite the Cache... Imo less than ideal especially since these caches are static (no dynamic growth or anything), they should be easier to handle than kv caches.
My suggestion proposal/suggestion would be to introduce a layer type based cache for linear attention similar to swa / dynamic / static. Going for full integration.
cc @ArthurZucker @Cyrilvallez @LysandreJik @gante | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40827/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40827/timeline | null | reopened | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40826 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40826/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40826/comments | https://api.github.com/repos/huggingface/transformers/issues/40826/events | https://github.com/huggingface/transformers/pull/40826 | 3,407,252,623 | PR_kwDOCUB6oc6oEzFg | 40,826 | fix florence kwargs | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T16:21:11 | 2025-09-15T09:05:50 | 2025-09-15T09:05:48 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40826",
"html_url": "https://github.com/huggingface/transformers/pull/40826",
"diff_url": "https://github.com/huggingface/transformers/pull/40826.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40826.patch",
"merged_at": "2025-09-15T09:05:47"
} | # What does this PR do?
This PR fixes the Florence 2 model kwargs issue with `num_items_in_batch`. Since the BART decoder doesn't take any kwargs, we shouldn't pass them in forward; otherwise we get an error.
`TypeError: BartDecoder.forward() got an unexpected keyword argument 'num_items_in_batch'`
cc @merveenoyan | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40826/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40826/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40825 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40825/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40825/comments | https://api.github.com/repos/huggingface/transformers/issues/40825/events | https://github.com/huggingface/transformers/pull/40825 | 3,407,209,684 | PR_kwDOCUB6oc6oEpjX | 40,825 | Fix Florence kwargs | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T16:08:10 | 2025-09-11T16:20:59 | 2025-09-11T16:20:59 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40825",
"html_url": "https://github.com/huggingface/transformers/pull/40825",
"diff_url": "https://github.com/huggingface/transformers/pull/40825.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40825.patch",
"merged_at": null
} | # What does this PR do?
This PR fixes the Florence 2 model kwargs issue with `num_items_in_batch`. Since the BART decoder doesn't take any kwargs, we shouldn't pass them in forward; otherwise we get an error.
`TypeError: BartDecoder.forward() got an unexpected keyword argument 'num_items_in_batch'`
cc @merveenoyan | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40825/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40824 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40824/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40824/comments | https://api.github.com/repos/huggingface/transformers/issues/40824/events | https://github.com/huggingface/transformers/pull/40824 | 3,406,952,402 | PR_kwDOCUB6oc6oDwv8 | 40,824 | Fix getter regression | {
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-09-11T14:58:48 | 2025-09-16T08:57:16 | 2025-09-16T08:57:13 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40824",
"html_url": "https://github.com/huggingface/transformers/pull/40824",
"diff_url": "https://github.com/huggingface/transformers/pull/40824.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40824.patch",
"merged_at": "2025-09-16T08:57:13"
} | # What does this PR do?
Should fix https://github.com/huggingface/transformers/issues/40815
- [x] Fix `return None` getter regression
- [x] Added tests for `get_decoder`'s weird behaviour in general (to be safe)
- [ ] Looking into the twin setter method as well | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40824/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40823 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40823/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40823/comments | https://api.github.com/repos/huggingface/transformers/issues/40823/events | https://github.com/huggingface/transformers/pull/40823 | 3,406,942,368 | PR_kwDOCUB6oc6oDuhU | 40,823 | Fix trainer tests | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T14:56:35 | 2025-09-17T16:05:18 | 2025-09-17T16:05:17 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40823",
"html_url": "https://github.com/huggingface/transformers/pull/40823",
"diff_url": "https://github.com/huggingface/transformers/pull/40823.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40823.patch",
"merged_at": "2025-09-17T16:05:17"
} | # What does this PR do?
This PR fixes the remaining failing Trainer tests:
- sigopt is archived, so no need to keep it in the integrations
- fixes the hp api
- fixes the failing liger test | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40823/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40822 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40822/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40822/comments | https://api.github.com/repos/huggingface/transformers/issues/40822/events | https://github.com/huggingface/transformers/issues/40822 | 3,406,911,750 | I_kwDOCUB6oc7LEVkG | 40,822 | Welcome v5 | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 9105758243,
"node_id": "LA_kwDOCUB6oc8AAAACHr7YIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for_v5?",
"name": "for_v5?",
"color": "35BC94",
"default": false,
"description": ""
}
] | open | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
},
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
},
{
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | [] | 2025-09-11T14:49:29 | 2025-10-25T03:41:49 | null | MEMBER | null | null | null | null | In this issue we share our plan for the upcoming version 5 of transformers. We've talked about version 5 for years and it's finally around the corner! We'll release a blog post announcing the focus of this release shortly, and wanted to share what we believe the process will look like over the coming weeks.
- Soon, a new branch named `v4` will be created on the repository. It is from this branch that all v4-related updates will take place. Going forward, `main` will act as the version 5 branch.
- For the next few weeks, every PR except breaking changes or significant refactors will be merged in both `main` and `v4`.
- In a few weeks, we will release what will likely be one of the last minor v4 releases (`v4.57.0`).
- A few weeks later, we will release `v5`. We will aim to limit, as much as possible, the breaking changes within that release; but expect a migration guide as well as some specific breaking changes enabling much more versatile, performant, and cleaner code going forward.
- Over the next few months, we'll continue patching the `v4` branch and will release patch updates.
The v5 release on PyPI will be preceded by RC releases that we will share in this issue. Please subscribe to this issue to stay updated, and let us know if you have thoughts about the process outlined above. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40822/reactions",
"total_count": 77,
"+1": 9,
"-1": 0,
"laugh": 0,
"hooray": 59,
"confused": 0,
"heart": 5,
"rocket": 4,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40822/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40821 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40821/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40821/comments | https://api.github.com/repos/huggingface/transformers/issues/40821/events | https://github.com/huggingface/transformers/pull/40821 | 3,406,825,623 | PR_kwDOCUB6oc6oDUxX | 40,821 | draft removal of special and added | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 9105758243,
"node_id": "LA_kwDOCUB6oc8AAAACHr7YIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for_v5?",
"name": "for_v5?",
"color": "35BC94",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-09-11T14:28:03 | 2025-10-16T09:37:35 | null | COLLABORATOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40821",
"html_url": "https://github.com/huggingface/transformers/pull/40821",
"diff_url": "https://github.com/huggingface/transformers/pull/40821.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40821.patch",
"merged_at": null
} | # What does this PR do?
For v5! Long-overdue cleanup | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40821/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40820 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40820/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40820/comments | https://api.github.com/repos/huggingface/transformers/issues/40820/events | https://github.com/huggingface/transformers/pull/40820 | 3,406,739,682 | PR_kwDOCUB6oc6oDBx5 | 40,820 | Add models to benchmarks | {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-09-11T14:07:36 | 2025-09-21T16:18:38 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40820",
"html_url": "https://github.com/huggingface/transformers/pull/40820",
"diff_url": "https://github.com/huggingface/transformers/pull/40820.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40820.patch",
"merged_at": null
} | # What does this PR do?
This PR adds bert, gemma3, gpt, mistral3 and qwen2 to the new benchmarking pipeline.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40820/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40819 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40819/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40819/comments | https://api.github.com/repos/huggingface/transformers/issues/40819/events | https://github.com/huggingface/transformers/pull/40819 | 3,406,663,817 | PR_kwDOCUB6oc6oCxB- | 40,819 | [`Jetmoe`] Fix RoPE | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-09-11T13:49:57 | 2025-09-12T09:07:04 | 2025-09-11T16:41:11 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40819",
"html_url": "https://github.com/huggingface/transformers/pull/40819",
"diff_url": "https://github.com/huggingface/transformers/pull/40819.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40819.patch",
"merged_at": "2025-09-11T16:41:11"
} | Seems like the integration tests on our CI also died for a while.
Fixes the RoPE dimension for JetMoe by setting the respective attribute mapping; the default calculation, i.e. `hidden_dim / num_attn_heads`, is not valid here. For reference on why this fix is valid, see https://github.com/huggingface/transformers/blob/895b3ebe418ebcf6a37fc838ff0effdb69d98386/src/transformers/models/jetmoe/modeling_jetmoe.py#L494
There could be better solutions but not sure if it's worth the effort.
Fixes #40817
cc @gante @ArthurZucker | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40819/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40818 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40818/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40818/comments | https://api.github.com/repos/huggingface/transformers/issues/40818/events | https://github.com/huggingface/transformers/pull/40818 | 3,405,876,739 | PR_kwDOCUB6oc6oABSE | 40,818 | Fix TrainingArguments.parallelism_config NameError with accelerate<1.10.1 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T10:24:12 | 2025-09-14T15:35:43 | 2025-09-14T15:35:43 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40818",
"html_url": "https://github.com/huggingface/transformers/pull/40818",
"diff_url": "https://github.com/huggingface/transformers/pull/40818.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40818.patch",
"merged_at": "2025-09-14T15:35:43"
} | Fix `TrainingArguments.parallelism_config` NameError with accelerate<1.10.1:
```python
NameError: name 'ParallelismConfig' is not defined
```
```python
RuntimeError: Type resolution failed for <class 'transformers.training_args.TrainingArguments'>. Try declaring the class in global scope or removing line of `from __future__ import annotations` which opts in Postponed Evaluation of Annotations (PEP 563)
```
This PR fixes a runtime compatibility issue in TrainingArguments with accelerate<1.10.1.
## Problem
With accelerate<1.10.1, the module `accelerate.parallelism_config` does not exist.
`TrainingArguments` currently annotates the field as:
```python
parallelism_config: Optional["ParallelismConfig"] = field(default=None)
```
When `HfArgumentParser` calls `typing.get_type_hints` on this dataclass, Python 3.12 attempts to resolve the `ForwardRef("ParallelismConfig")` and fails.
See failing CI in downstream TRL: https://github.com/huggingface/trl/actions/runs/17635808933/job/50111711153
## Minimal Reproducible Example
```bash
pip install transformers accelerate==1.4.0
```
```python
from transformers import HfArgumentParser, TrainingArguments
# This will trigger get_type_hints on TrainingArguments
parser = HfArgumentParser(TrainingArguments)
```
```python
Traceback (most recent call last):
File "huggingface/transformers/src/transformers/hf_argparser.py", line 258, in _add_dataclass_arguments
type_hints: dict[str, type] = get_type_hints(dtype)
^^^^^^^^^^^^^^^^^^^^^
File ".pyenv/versions/3.12.9/lib/python3.12/typing.py", line 2277, in get_type_hints
value = _eval_type(value, base_globals, base_locals, base.__type_params__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".pyenv/versions/3.12.9/lib/python3.12/typing.py", line 430, in _eval_type
ev_args = tuple(
^^^^^^
File ".pyenv/versions/3.12.9/lib/python3.12/typing.py", line 431, in <genexpr>
_eval_type(
File ".pyenv/versions/3.12.9/lib/python3.12/typing.py", line 415, in _eval_type
return t._evaluate(globalns, localns, type_params, recursive_guard=recursive_guard)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".pyenv/versions/3.12.9/lib/python3.12/typing.py", line 947, in _evaluate
eval(self.__forward_code__, globalns, localns),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 1, in <module>
NameError: name 'ParallelismConfig' is not defined
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "huggingface/transformers/src/transformers/hf_argparser.py", line 143, in __init__
self._add_dataclass_arguments(dtype)
File "huggingface/transformers/src/transformers/hf_argparser.py", line 260, in _add_dataclass_arguments
raise RuntimeError(
RuntimeError: Type resolution failed for <class 'transformers.training_args.TrainingArguments'>. Try declaring the class in global scope or removing line of `from __future__ import annotations` which opts in Postponed Evaluation of Annotations (PEP 563)
```
## Solution
- Always define the symbol `ParallelismConfig` at module scope: it is only used as a type annotation.
- Use the real class if `accelerate>=1.10.1`; otherwise fall back to `Any`.
- Replace the string annotation with `Optional[ParallelismConfig]` to avoid creating a `ForwardRef`.
This ensures `get_type_hints(TrainingArguments)` works across Python versions and with older accelerate installs. | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40818/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40818/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40817 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40817/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40817/comments | https://api.github.com/repos/huggingface/transformers/issues/40817/events | https://github.com/huggingface/transformers/issues/40817 | 3,405,857,048 | I_kwDOCUB6oc7LAUEY | 40,817 | Got shape error when running JetMoe | {
"login": "wtomin",
"id": 33117903,
"node_id": "MDQ6VXNlcjMzMTE3OTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/33117903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wtomin",
"html_url": "https://github.com/wtomin",
"followers_url": "https://api.github.com/users/wtomin/followers",
"following_url": "https://api.github.com/users/wtomin/following{/other_user}",
"gists_url": "https://api.github.com/users/wtomin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wtomin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wtomin/subscriptions",
"organizations_url": "https://api.github.com/users/wtomin/orgs",
"repos_url": "https://api.github.com/users/wtomin/repos",
"events_url": "https://api.github.com/users/wtomin/events{/privacy}",
"received_events_url": "https://api.github.com/users/wtomin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-09-11T10:18:58 | 2025-09-11T16:41:12 | 2025-09-11T16:41:12 | NONE | null | null | null | null | ### System Info
**Environment**
- Linux, A100 GPU (CUDA 12.1, driver version 530.30.02)
- python=3.10.0
- transformers=4.50.0 (upgrade to 4.56.1 leads to the same error)
### Who can help?
@ArthurZucker @gante
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Run this code snippet with transformers and `jetmoe/jetmoe-8b` (I found this example in #40749):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("jetmoe/jetmoe-8b")
model = AutoModelForCausalLM.from_pretrained(
"jetmoe/jetmoe-8b",
device_map="auto",
attn_implementation="sdpa"
)
input_ids = tokenizer("The stock market rallied today after positive economic news", return_tensors="pt").to(model.device)
output = model.generate(**input_ids, cache_implementation=None)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
### Expected behavior
**Error message**
```bash
Traceback (most recent call last):
File "/data2/g00523483/ddd/JetMoE/run_model.py", line 12, in <module>
output = model.generate(**input_ids, cache_implementation=None)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/transformers/generation/utils.py", line 2326, in generate
result = self._sample(
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/transformers/generation/utils.py", line 3286, in _sample
outputs = self(**model_inputs, return_dict=True)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/accelerate/hooks.py", line 170, in new_forward
output = module._old_forward(*args, **kwargs)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
return func(*args, **kwargs)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/transformers/models/jetmoe/modeling_jetmoe.py", line 1336, in forward
outputs = self.model(
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/transformers/models/jetmoe/modeling_jetmoe.py", line 1083, in forward
layer_outputs = decoder_layer(
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/accelerate/hooks.py", line 170, in new_forward
output = module._old_forward(*args, **kwargs)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/transformers/models/jetmoe/modeling_jetmoe.py", line 831, in forward
attn_output, self_attn_weights, present_key_value, attn_router_logits = self.self_attention(
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/accelerate/hooks.py", line 170, in new_forward
output = module._old_forward(*args, **kwargs)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/transformers/models/jetmoe/modeling_jetmoe.py", line 637, in forward
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
File "/data2/g00523483/.conda/envs/ddd/lib/python3.10/site-packages/transformers/models/jetmoe/modeling_jetmoe.py", line 489, in apply_rotary_pos_emb
q_embed = (q * cos) + (rotate_half(q) * sin)
RuntimeError: The size of tensor a (128) must match the size of tensor b (64) at non-singleton dimension 3
``` | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40817/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40816 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40816/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40816/comments | https://api.github.com/repos/huggingface/transformers/issues/40816/events | https://github.com/huggingface/transformers/pull/40816 | 3,405,826,447 | PR_kwDOCUB6oc6n_2BO | 40,816 | Fix fla import for Qwen3-Next models | {
"login": "bozheng-hit",
"id": 8787969,
"node_id": "MDQ6VXNlcjg3ODc5Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8787969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bozheng-hit",
"html_url": "https://github.com/bozheng-hit",
"followers_url": "https://api.github.com/users/bozheng-hit/followers",
"following_url": "https://api.github.com/users/bozheng-hit/following{/other_user}",
"gists_url": "https://api.github.com/users/bozheng-hit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bozheng-hit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bozheng-hit/subscriptions",
"organizations_url": "https://api.github.com/users/bozheng-hit/orgs",
"repos_url": "https://api.github.com/users/bozheng-hit/repos",
"events_url": "https://api.github.com/users/bozheng-hit/events{/privacy}",
"received_events_url": "https://api.github.com/users/bozheng-hit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T10:11:08 | 2025-09-11T10:13:32 | 2025-09-11T10:13:32 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40816",
"html_url": "https://github.com/huggingface/transformers/pull/40816",
"diff_url": "https://github.com/huggingface/transformers/pull/40816.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40816.patch",
"merged_at": null
} | This PR fix the import issue of the fla library for Qwen3-Next models. | {
"login": "bozheng-hit",
"id": 8787969,
"node_id": "MDQ6VXNlcjg3ODc5Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8787969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bozheng-hit",
"html_url": "https://github.com/bozheng-hit",
"followers_url": "https://api.github.com/users/bozheng-hit/followers",
"following_url": "https://api.github.com/users/bozheng-hit/following{/other_user}",
"gists_url": "https://api.github.com/users/bozheng-hit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bozheng-hit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bozheng-hit/subscriptions",
"organizations_url": "https://api.github.com/users/bozheng-hit/orgs",
"repos_url": "https://api.github.com/users/bozheng-hit/repos",
"events_url": "https://api.github.com/users/bozheng-hit/events{/privacy}",
"received_events_url": "https://api.github.com/users/bozheng-hit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40816/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40815 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40815/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40815/comments | https://api.github.com/repos/huggingface/transformers/issues/40815/events | https://github.com/huggingface/transformers/issues/40815 | 3,405,646,151 | I_kwDOCUB6oc7K_glH | 40,815 | get_decoder feature regression in 4.56.0 | {
"login": "KyleMylonakisProtopia",
"id": 122286752,
"node_id": "U_kgDOB0nyoA",
"avatar_url": "https://avatars.githubusercontent.com/u/122286752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KyleMylonakisProtopia",
"html_url": "https://github.com/KyleMylonakisProtopia",
"followers_url": "https://api.github.com/users/KyleMylonakisProtopia/followers",
"following_url": "https://api.github.com/users/KyleMylonakisProtopia/following{/other_user}",
"gists_url": "https://api.github.com/users/KyleMylonakisProtopia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KyleMylonakisProtopia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KyleMylonakisProtopia/subscriptions",
"organizations_url": "https://api.github.com/users/KyleMylonakisProtopia/orgs",
"repos_url": "https://api.github.com/users/KyleMylonakisProtopia/repos",
"events_url": "https://api.github.com/users/KyleMylonakisProtopia/events{/privacy}",
"received_events_url": "https://api.github.com/users/KyleMylonakisProtopia/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-09-11T09:25:12 | 2025-09-16T08:57:14 | 2025-09-16T08:57:14 | CONTRIBUTOR | null | null | null | null | ### System Info
In the release of transformers v4.56.0, this PR https://github.com/huggingface/transformers/pull/39509 introduced a refactor of the public `get_decoder` method, which previously existed on models, by moving it to the `PreTrainedModel` class.
Unfortunately, this introduced a significant behavior change: `*ForCausalLM` models no longer have `get_decoder()` return the underlying base model.
For example, calling `model.get_decoder()` on a `MistralForCausalLM` instance named `model` returns `None`.
The reason this occurs is obvious when looking at the offending PR:
```python
def get_decoder(self):
"""
Best-effort lookup of the *decoder* module.
Order of attempts (covers ~85 % of current usages):
1. `self.decoder`
2. `self.model` (many wrappers store the decoder here)
3. `self.model.get_decoder()` (nested wrappers)
4. fallback: raise for the few exotic models that need a bespoke rule
"""
if hasattr(self, "decoder"):
return self.decoder
if hasattr(self, "model"):
inner = self.model
if hasattr(inner, "get_decoder"):
return inner.get_decoder()
return inner
return None
```
In these cases the `if hasattr(self, "model"):` conditional block is always entered, and the underlying model has a `get_decoder` method (it is a `PreTrainedModel`, as all transformers models are), so the nested call is made. At this point we are inside the decoder itself, calling its `get_decoder` method. The decoder has no `decoder` or `model` attribute, so the function returns `None`, which is then passed back to the parent caller.
There are a couple of ways this could be fixed, but I don't know what their current impact would be on other parts of the code. I may open a PR, but I am quite busy at the moment. @molbap @ArthurZucker since you were the authors and reviewers here, do you mind taking another look at this?
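For illustration only, here is a minimal runnable sketch of one possible fix: falling back to the wrapped module when the nested lookup yields `None`. The toy classes stand in for the real models, and the actual fix in the library may well look different:

```python
def get_decoder(self):
    # Sketch of a possible fix: if the nested lookup yields None,
    # fall back to returning the wrapped module itself.
    if hasattr(self, "decoder"):
        return self.decoder
    if hasattr(self, "model"):
        inner = self.model
        if hasattr(inner, "get_decoder"):
            nested = inner.get_decoder()
            # A bare decoder has neither .decoder nor .model, so the
            # nested call returns None; treat `inner` as the decoder then.
            if nested is not None:
                return nested
        return inner
    return None


class ToyDecoder:
    # Mimics the bare decoder (no .decoder, no .model attribute).
    get_decoder = get_decoder


class ToyForCausalLM:
    # Mimics a *ForCausalLM wrapper that stores the decoder in .model.
    get_decoder = get_decoder

    def __init__(self):
        self.model = ToyDecoder()


wrapper = ToyForCausalLM()
assert wrapper.get_decoder() is wrapper.model  # no longer None
```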
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use `get_decoder` on say a `MistralForCausalLM` model.
### Expected behavior
The underlying `model` attribute should be returned for `*ForCausalLM` models, not None, as these models are decoder only models by transformers convention. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40815/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/40815/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40814 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40814/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40814/comments | https://api.github.com/repos/huggingface/transformers/issues/40814/events | https://github.com/huggingface/transformers/pull/40814 | 3,405,577,812 | PR_kwDOCUB6oc6n-_Of | 40,814 | [Bug fix #40813] Fix base_model_tp_plan of Starcoder2 model. | {
"login": "greg-kwasniewski1",
"id": 213329731,
"node_id": "U_kgDODLcnQw",
"avatar_url": "https://avatars.githubusercontent.com/u/213329731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/greg-kwasniewski1",
"html_url": "https://github.com/greg-kwasniewski1",
"followers_url": "https://api.github.com/users/greg-kwasniewski1/followers",
"following_url": "https://api.github.com/users/greg-kwasniewski1/following{/other_user}",
"gists_url": "https://api.github.com/users/greg-kwasniewski1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/greg-kwasniewski1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/greg-kwasniewski1/subscriptions",
"organizations_url": "https://api.github.com/users/greg-kwasniewski1/orgs",
"repos_url": "https://api.github.com/users/greg-kwasniewski1/repos",
"events_url": "https://api.github.com/users/greg-kwasniewski1/events{/privacy}",
"received_events_url": "https://api.github.com/users/greg-kwasniewski1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T09:08:12 | 2025-09-15T08:46:32 | 2025-09-15T08:46:32 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40814",
"html_url": "https://github.com/huggingface/transformers/pull/40814",
"diff_url": "https://github.com/huggingface/transformers/pull/40814.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40814.patch",
"merged_at": "2025-09-15T08:46:32"
} | Fixes #40813
The second linear layer in `Starcoder2`'s MLP, `c_proj`, should be row-sharded, not column-sharded. Column-sharding both `c_fc` and `c_proj` produces an incorrect sharding and a tensor shape mismatch.
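As a rough illustration of why the shapes clash, here is some toy shape arithmetic (approximate Starcoder2-7b sizes, not the library's actual sharding code): with column-parallel `c_fc`, each rank holds only a column shard of the intermediate activation, so the following `c_proj` weight must be row-sharded to consume it.

```python
def matmul_shape(a, b):
    """Result shape of (a @ b), or an error on a reduction-dim mismatch."""
    if a[1] != b[0]:
        raise ValueError(f"a and b must have same reduction dim: {a} x {b}")
    return (a[0], b[1])

hidden, inter, ranks = 4608, 18432, 2   # approximate Starcoder2-7b sizes
x = (3, hidden)                          # a 3-token batch

# colwise c_fc: each rank holds a column slice of the weight, so each
# rank produces only a column shard of the intermediate activation.
act = matmul_shape(x, (hidden, inter // ranks))    # (3, 9216)

# rowwise c_proj: each rank holds the matching row slice; partial outputs
# are then summed across ranks (all-reduce), which is what we want.
out = matmul_shape(act, (inter // ranks, hidden))  # (3, 4608)
assert out == (3, hidden)

# colwise c_proj (the buggy plan) keeps all `inter` input rows on each
# rank, so the weight shard no longer lines up with the activation shard:
try:
    matmul_shape(act, (inter, hidden // ranks))
except ValueError as err:
    print(err)  # mirrors the reported "same reduction dim" runtime failure
```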
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40814/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40813 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40813/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40813/comments | https://api.github.com/repos/huggingface/transformers/issues/40813/events | https://github.com/huggingface/transformers/issues/40813 | 3,405,554,972 | I_kwDOCUB6oc7K_KUc | 40,813 | Incorrect sharding configuration for Starcoder2 model | {
"login": "greg-kwasniewski1",
"id": 213329731,
"node_id": "U_kgDODLcnQw",
"avatar_url": "https://avatars.githubusercontent.com/u/213329731?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/greg-kwasniewski1",
"html_url": "https://github.com/greg-kwasniewski1",
"followers_url": "https://api.github.com/users/greg-kwasniewski1/followers",
"following_url": "https://api.github.com/users/greg-kwasniewski1/following{/other_user}",
"gists_url": "https://api.github.com/users/greg-kwasniewski1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/greg-kwasniewski1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/greg-kwasniewski1/subscriptions",
"organizations_url": "https://api.github.com/users/greg-kwasniewski1/orgs",
"repos_url": "https://api.github.com/users/greg-kwasniewski1/repos",
"events_url": "https://api.github.com/users/greg-kwasniewski1/events{/privacy}",
"received_events_url": "https://api.github.com/users/greg-kwasniewski1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-09-11T09:02:53 | 2025-09-15T08:46:33 | 2025-09-15T08:46:33 | CONTRIBUTOR | null | null | null | null | ### System Info
Transformers main branch (commit [0f1b128](https://github.com/huggingface/transformers/commit/0f1b128d3359a26bd18be99c26d7f04fb3cba914) )
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.15.0-1030-nvidia-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0a0+5228986c39.nv25.06 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: tensor-parallel
- Using GPU in script?: yes
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running TP inference on `bigcode/starcoder2-7b` throws a tensor-shape error due to a misconfigured `base_model_tp_plan`.
`demo.py`:
```python
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "bigcode/starcoder2-7b"
model = AutoModelForCausalLM.from_pretrained(model_id, tp_plan="auto")
model._tp_plan['model.layers.*.mlp.c_proj'] = 'rowwise'
print(f"TP plan: {model._tp_plan}, class: {type(model._tp_plan)}")
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Can I help"
inputs = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
# distributed run
outputs = model(inputs)
# print the output
print(outputs)
```
run with
```bash
torchrun --nproc_per_node=2 demo.py
```
The correct `base_model_tp_plan` should replace:
```
['model.layers.*.mlp.c_proj'] = 'colwise'
```
with
```
['model.layers.*.mlp.c_proj'] = 'rowwise'
```
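For concreteness, the change amounts to flipping one entry in the plan dict. The following is a hypothetical excerpt shaped like `base_model_tp_plan`, with only the two MLP entries shown (the rest of the real plan is omitted):

```python
# Hypothetical excerpt in the shape of a `base_model_tp_plan` dict; only
# the two MLP entries are taken from this issue, everything else is omitted.
base_model_tp_plan = {
    "model.layers.*.mlp.c_fc": "colwise",    # first MLP linear: shard output dim
    "model.layers.*.mlp.c_proj": "rowwise",  # was "colwise" -- the reported bug
}
assert base_model_tp_plan["model.layers.*.mlp.c_proj"] == "rowwise"
```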
### Expected behavior
Throws:
```
(...)
[rank0]: File "/lustre/fs1/portfolios/coreai/users/gkwasniewski/hf-repo/transformers/src/transformers/models/starcoder2/modeling_starcoder2.py", line 65, in forward
[rank0]: hidden_states = self.c_proj(hidden_states)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank0]: return self._call_impl(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1857, in _call_impl
[rank0]: return inner()
[rank0]: ^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1805, in inner
[rank0]: result = forward_call(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/linear.py", line 125, in forward
[rank0]: return F.linear(input, self.weight, self.bias)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_compile.py", line 51, in inner
[rank0]: return disable_fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 850, in _fn
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_api.py", line 350, in __torch_dispatch__
[rank0]: return DTensor._op_dispatcher.dispatch(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_dispatch.py", line 160, in dispatch
[rank0]: self.sharding_propagator.propagate(op_info)
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py", line 266, in propagate
[rank0]: OutputSharding, self.propagate_op_sharding(op_info.schema)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py", line 45, in __call__
[rank0]: return self.cache(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py", line 279, in propagate_op_sharding_non_cached
[rank0]: out_tensor_meta = self._propagate_tensor_meta_non_cached(op_schema)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py", line 126, in _propagate_tensor_meta_non_cached
[rank0]: fake_out = op_schema.op(*fake_args, **fake_kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 756, in __call__
[rank0]: return self._op(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/utils/_stats.py", line 27, in wrapper
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py", line 1311, in __torch_dispatch__
[rank0]: return self.dispatch(func, types, args, kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py", line 1932, in dispatch
[rank0]: return self._cached_dispatch_impl(func, types, args, kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py", line 1414, in _cached_dispatch_impl
[rank0]: output = self._dispatch_impl(func, types, args, kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py", line 2460, in _dispatch_impl
[rank0]: decomposition_table[func](*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_prims_common/wrappers.py", line 308, in _fn
[rank0]: result = fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_decomp/decompositions.py", line 84, in inner
[rank0]: r = f(*tree_map(increase_prec, args), **tree_map(increase_prec, kwargs))
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_decomp/decompositions.py", line 1451, in addmm
[rank0]: out = alpha * torch.mm(mat1, mat2)
[rank0]: ^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/utils/_stats.py", line 27, in wrapper
[rank0]: return fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py", line 1311, in __torch_dispatch__
[rank0]: return self.dispatch(func, types, args, kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py", line 1932, in dispatch
[rank0]: return self._cached_dispatch_impl(func, types, args, kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py", line 1414, in _cached_dispatch_impl
[rank0]: output = self._dispatch_impl(func, types, args, kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_subclasses/fake_tensor.py", line 2554, in _dispatch_impl
[rank0]: r = func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 756, in __call__
[rank0]: return self._op(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_prims_common/wrappers.py", line 308, in _fn
[rank0]: result = fn(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/_meta_registrations.py", line 2236, in meta_mm
[rank0]: torch._check(
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/__init__.py", line 1668, in _check
[rank0]: _check_with(RuntimeError, cond, message)
[rank0]: File "/usr/local/lib/python3.12/dist-packages/torch/__init__.py", line 1650, in _check_with
[rank0]: raise error_type(message_evaluated)
[rank0]: RuntimeError: a and b must have same reduction dim, but got [3, 9216] X [18432, 4608].
(...)
```
Expected output:
```
CausalLMOutputWithPast(loss=None, logits=tensor([[[ 0.6951, -2.9710, -12.8470, ..., -4.8511, -6.0277, -6.6027],
[ 2.4489, -0.3970, -1.9423, ..., -3.9063, -5.0727, -5.9155],
[ 4.5938, -0.8972, -1.5770, ..., -4.8748, -2.2605, -5.4515]]],
```
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40813/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40812 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40812/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40812/comments | https://api.github.com/repos/huggingface/transformers/issues/40812/events | https://github.com/huggingface/transformers/pull/40812 | 3,405,361,104 | PR_kwDOCUB6oc6n-Px2 | 40,812 | fix: XIELU act parameters not being casted to correct dtype | {
"login": "NanoCode012",
"id": 9899957,
"node_id": "MDQ6VXNlcjk4OTk5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9899957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NanoCode012",
"html_url": "https://github.com/NanoCode012",
"followers_url": "https://api.github.com/users/NanoCode012/followers",
"following_url": "https://api.github.com/users/NanoCode012/following{/other_user}",
"gists_url": "https://api.github.com/users/NanoCode012/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NanoCode012/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NanoCode012/subscriptions",
"organizations_url": "https://api.github.com/users/NanoCode012/orgs",
"repos_url": "https://api.github.com/users/NanoCode012/repos",
"events_url": "https://api.github.com/users/NanoCode012/events{/privacy}",
"received_events_url": "https://api.github.com/users/NanoCode012/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T08:12:17 | 2025-09-15T09:05:56 | 2025-09-15T09:05:56 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40812",
"html_url": "https://github.com/huggingface/transformers/pull/40812",
"diff_url": "https://github.com/huggingface/transformers/pull/40812.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40812.patch",
"merged_at": "2025-09-15T09:05:56"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
While integrating the new swiss-ai Apertus model into axolotl, I found an error in the `_xielu_cuda` method when running with a 4-bit QLoRA config.
```bash
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/activations.py", line 282, in forward
return self._xielu_cuda_fn(input)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/activations.py", line 268, in _xielu_cuda
result = self._xielu_cuda_obj.forward(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Data type of x (c10::BFloat16) must match data type of alpha_p (float).
```
This error is raised from:
```cuda
TORCH_CHECK(x.dtype() == alpha_p.dtype(), "Data type of x (", x.dtype(),
") must match data type of alpha_p (", alpha_p.dtype(), ").");
TORCH_CHECK(x.dtype() == alpha_n.dtype(), "Data type of x (", x.dtype(),
") must match data type of alpha_n (", alpha_n.dtype(), ").");
```
https://github.com/rubber-duck-debug/xielu/blob/59d60314edf567ad6b9895a1ff5d7533a14ddc96/xielu/cuda/src/xielu_impl.cu#L395-L398
I see that it does start with `dtype=bf16` https://github.com/huggingface/transformers/blob/de01a22aff21d16532d8dd68806589ca6c73dd5c/src/transformers/activations.py#L203-L210
However, some later casting (possibly by PEFT or FlashAttention) may have caused the parameters to end up as fp32.
This PR solves it by casting the parameters to the hidden states' dtype. Since both are one-element 1D tensors, the cost of this cast should be very minor.
Tested working downstream via monkey patching https://github.com/axolotl-ai-cloud/axolotl/pull/3144
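The pattern the fix follows can be sketched with a toy module. This is not the actual `XIELUActivation` implementation, only an illustration of the cast, and it assumes PyTorch is available:

```python
import torch
import torch.nn as nn

class XIELUSketch(nn.Module):
    """Toy stand-in for xIELU: casts its one-element alpha parameters to
    the input's dtype before handing them to the fused CUDA kernel."""

    def __init__(self):
        super().__init__()
        # Stored in fp32, e.g. after an upcast by a wrapping library.
        self.alpha_p = nn.Parameter(torch.tensor([0.8]))
        self.alpha_n = nn.Parameter(torch.tensor([0.8]))

    def forward(self, x):
        # Both tensors hold a single element, so this cast is essentially free.
        alpha_p = self.alpha_p.to(x.dtype)
        alpha_n = self.alpha_n.to(x.dtype)
        # The real module would now call the fused CUDA kernel, which
        # requires alpha_p / alpha_n to match x's dtype exactly.
        assert alpha_p.dtype == x.dtype and alpha_n.dtype == x.dtype
        return x  # placeholder for the fused kernel's output

act = XIELUSketch()
out = act(torch.ones(2, 3, dtype=torch.bfloat16))
```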
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40812/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40811 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40811/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40811/comments | https://api.github.com/repos/huggingface/transformers/issues/40811/events | https://github.com/huggingface/transformers/pull/40811 | 3,405,272,952 | PR_kwDOCUB6oc6n98Z7 | 40,811 | Fixes in check_model_inputs, GPTBigCodeModel and ImageGPTModel | {
"login": "IlyasMoutawwakil",
"id": 57442720,
"node_id": "MDQ6VXNlcjU3NDQyNzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/57442720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IlyasMoutawwakil",
"html_url": "https://github.com/IlyasMoutawwakil",
"followers_url": "https://api.github.com/users/IlyasMoutawwakil/followers",
"following_url": "https://api.github.com/users/IlyasMoutawwakil/following{/other_user}",
"gists_url": "https://api.github.com/users/IlyasMoutawwakil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IlyasMoutawwakil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IlyasMoutawwakil/subscriptions",
"organizations_url": "https://api.github.com/users/IlyasMoutawwakil/orgs",
"repos_url": "https://api.github.com/users/IlyasMoutawwakil/repos",
"events_url": "https://api.github.com/users/IlyasMoutawwakil/events{/privacy}",
"received_events_url": "https://api.github.com/users/IlyasMoutawwakil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T07:48:58 | 2025-10-06T14:34:26 | 2025-10-06T14:34:24 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40811",
"html_url": "https://github.com/huggingface/transformers/pull/40811",
"diff_url": "https://github.com/huggingface/transformers/pull/40811.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40811.patch",
"merged_at": "2025-10-06T14:34:24"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40811/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40810 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40810/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40810/comments | https://api.github.com/repos/huggingface/transformers/issues/40810/events | https://github.com/huggingface/transformers/pull/40810 | 3,405,037,842 | PR_kwDOCUB6oc6n9I4Q | 40,810 | Update no split modules in T5Gemma model | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T06:40:07 | 2025-09-12T11:44:15 | 2025-09-12T10:44:57 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40810",
"html_url": "https://github.com/huggingface/transformers/pull/40810",
"diff_url": "https://github.com/huggingface/transformers/pull/40810.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40810.patch",
"merged_at": "2025-09-12T10:44:57"
} | There's no T5GemmaBlock implementation, so the FSDP no-split modules should at least include these two kinds of layers. | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40810/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40809 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40809/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40809/comments | https://api.github.com/repos/huggingface/transformers/issues/40809/events | https://github.com/huggingface/transformers/pull/40809 | 3,404,880,123 | PR_kwDOCUB6oc6n8mH2 | 40,809 | Improve module name handling for local custom code | {
"login": "XuehaiPan",
"id": 16078332,
"node_id": "MDQ6VXNlcjE2MDc4MzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/16078332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XuehaiPan",
"html_url": "https://github.com/XuehaiPan",
"followers_url": "https://api.github.com/users/XuehaiPan/followers",
"following_url": "https://api.github.com/users/XuehaiPan/following{/other_user}",
"gists_url": "https://api.github.com/users/XuehaiPan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XuehaiPan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XuehaiPan/subscriptions",
"organizations_url": "https://api.github.com/users/XuehaiPan/orgs",
"repos_url": "https://api.github.com/users/XuehaiPan/repos",
"events_url": "https://api.github.com/users/XuehaiPan/events{/privacy}",
"received_events_url": "https://api.github.com/users/XuehaiPan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T05:44:44 | 2025-10-01T08:45:16 | 2025-09-16T13:11:49 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40809",
"html_url": "https://github.com/huggingface/transformers/pull/40809",
"diff_url": "https://github.com/huggingface/transformers/pull/40809.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40809.patch",
"merged_at": "2025-09-16T13:11:49"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This is a follow-up for PR #40745.
- #40745
Fixes #40496
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40809/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40808 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40808/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40808/comments | https://api.github.com/repos/huggingface/transformers/issues/40808/events | https://github.com/huggingface/transformers/pull/40808 | 3,404,835,696 | PR_kwDOCUB6oc6n8ctE | 40,808 | Improve torch_dtype checks | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T05:20:57 | 2025-09-12T10:07:08 | 2025-09-12T09:57:59 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40808",
"html_url": "https://github.com/huggingface/transformers/pull/40808",
"diff_url": "https://github.com/huggingface/transformers/pull/40808.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40808.patch",
"merged_at": "2025-09-12T09:57:59"
} | # What does this PR do?
Improve `torch_dtype` checks. The old check overwrites `dtype` when `torch_dtype`=`bf16` and `dtype`=`auto`. However, the intention of specifying `auto` is different from not specifying the value. | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40808/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40807 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40807/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40807/comments | https://api.github.com/repos/huggingface/transformers/issues/40807/events | https://github.com/huggingface/transformers/pull/40807 | 3,404,828,209 | PR_kwDOCUB6oc6n8bCz | 40,807 | Align torch implementation of Gated DeltaNet in Qwen3-Next with fla library. | {
"login": "bozheng-hit",
"id": 8787969,
"node_id": "MDQ6VXNlcjg3ODc5Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8787969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bozheng-hit",
"html_url": "https://github.com/bozheng-hit",
"followers_url": "https://api.github.com/users/bozheng-hit/followers",
"following_url": "https://api.github.com/users/bozheng-hit/following{/other_user}",
"gists_url": "https://api.github.com/users/bozheng-hit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bozheng-hit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bozheng-hit/subscriptions",
"organizations_url": "https://api.github.com/users/bozheng-hit/orgs",
"repos_url": "https://api.github.com/users/bozheng-hit/repos",
"events_url": "https://api.github.com/users/bozheng-hit/events{/privacy}",
"received_events_url": "https://api.github.com/users/bozheng-hit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T05:17:51 | 2025-09-11T11:10:16 | 2025-09-11T11:10:16 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40807",
"html_url": "https://github.com/huggingface/transformers/pull/40807",
"diff_url": "https://github.com/huggingface/transformers/pull/40807.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40807.patch",
"merged_at": "2025-09-11T11:10:15"
} | This pull request aims to align the PyTorch implementation of Gated DeltaNet in Qwen3-Next with the corresponding implementation in the fla library, ensuring consistency between the two implementations. | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40807/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40806 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40806/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40806/comments | https://api.github.com/repos/huggingface/transformers/issues/40806/events | https://github.com/huggingface/transformers/pull/40806 | 3,404,406,021 | PR_kwDOCUB6oc6n6_0m | 40,806 | Intel CPU dockerfile | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T01:26:04 | 2025-09-19T07:09:23 | 2025-09-17T15:42:31 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40806",
"html_url": "https://github.com/huggingface/transformers/pull/40806",
"diff_url": "https://github.com/huggingface/transformers/pull/40806.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40806.patch",
"merged_at": "2025-09-17T15:42:31"
} | Upload Intel CPU dockerfile | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40806/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40805 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40805/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40805/comments | https://api.github.com/repos/huggingface/transformers/issues/40805/events | https://github.com/huggingface/transformers/pull/40805 | 3,404,393,999 | PR_kwDOCUB6oc6n69XM | 40,805 | RUFF fix on CI scripts | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T01:17:01 | 2025-09-19T14:18:47 | 2025-09-19T13:50:27 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40805",
"html_url": "https://github.com/huggingface/transformers/pull/40805",
"diff_url": "https://github.com/huggingface/transformers/pull/40805.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40805.patch",
"merged_at": "2025-09-19T13:50:27"
} | # What does this PR do?
Remaining RUFF changes on CI. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40805/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40804 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40804/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40804/comments | https://api.github.com/repos/huggingface/transformers/issues/40804/events | https://github.com/huggingface/transformers/pull/40804 | 3,404,353,734 | PR_kwDOCUB6oc6n605K | 40,804 | Push generation config along with checkpoints | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-11T00:47:44 | 2025-09-12T09:05:39 | 2025-09-11T15:33:16 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40804",
"html_url": "https://github.com/huggingface/transformers/pull/40804",
"diff_url": "https://github.com/huggingface/transformers/pull/40804.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40804.patch",
"merged_at": "2025-09-11T15:33:16"
} | Currently, every modeling file is pushed during training except the generation config, which makes testing the intermediate checkpoints impossible. This PR solves this. | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40804/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40803 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40803/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40803/comments | https://api.github.com/repos/huggingface/transformers/issues/40803/events | https://github.com/huggingface/transformers/pull/40803 | 3,403,559,786 | PR_kwDOCUB6oc6n4G9U | 40,803 | [docstrings / type hints] Update outdated annotations for `past_key_values` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T18:47:56 | 2025-09-15T13:22:32 | 2025-09-15T08:52:32 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40803",
"html_url": "https://github.com/huggingface/transformers/pull/40803",
"diff_url": "https://github.com/huggingface/transformers/pull/40803.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40803.patch",
"merged_at": "2025-09-15T08:52:32"
} | # What does this PR do?
🎯 part of the effort to enforce better standardization
We have been migrating `past_key_values` from the old tuple of tuples of tensors format to the new `Cache` format. However, many type hints and docstrings were not updated accordingly -- our users are getting incorrect information from these annotations 😮
This PR aims to reduce incorrect information. A few notes:
- I heavily relied on bulk changes, and I haven't double-checked all touched models to confirm they support `Cache`, the base class (as opposed to models like `mamba`). Nevertheless, even if there are a few inconsistencies, these models were previously annotated with the legacy format -- they are either models we didn't update due to low impact (and we'll likely deprecate soon), or the type hint was already incorrect to begin with 🤗
- `deprecated` models also received bulk changes; I don't think it's worth manually reverting them 🙈
- encoder-decoder models can have a more precise type hint and docs, I'll leave that for a future round. The updated docstring is also correct for them. | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40803/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40803/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40802 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40802/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40802/comments | https://api.github.com/repos/huggingface/transformers/issues/40802/events | https://github.com/huggingface/transformers/pull/40802 | 3,403,223,009 | PR_kwDOCUB6oc6n27JI | 40,802 | Force new vision models addition to include a fast image processor | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T17:12:32 | 2025-09-25T15:58:19 | 2025-09-25T15:58:19 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40802",
"html_url": "https://github.com/huggingface/transformers/pull/40802",
"diff_url": "https://github.com/huggingface/transformers/pull/40802.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40802.patch",
"merged_at": "2025-09-25T15:58:19"
} | # What does this PR do?
As the title says, this adds a test that will fail for models added after September 1st 2025, if they include a slow image processor but not a fast one.
Cc @qubvel as we discussed this | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40802/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40802/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40801 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40801/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40801/comments | https://api.github.com/repos/huggingface/transformers/issues/40801/events | https://github.com/huggingface/transformers/pull/40801 | 3,403,218,406 | PR_kwDOCUB6oc6n26Gh | 40,801 | feat: kv cache retention across conversations | {
"login": "McPatate",
"id": 9112841,
"node_id": "MDQ6VXNlcjkxMTI4NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/McPatate",
"html_url": "https://github.com/McPatate",
"followers_url": "https://api.github.com/users/McPatate/followers",
"following_url": "https://api.github.com/users/McPatate/following{/other_user}",
"gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/McPatate/subscriptions",
"organizations_url": "https://api.github.com/users/McPatate/orgs",
"repos_url": "https://api.github.com/users/McPatate/repos",
"events_url": "https://api.github.com/users/McPatate/events{/privacy}",
"received_events_url": "https://api.github.com/users/McPatate/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-09-10T17:11:06 | 2025-09-18T13:03:53 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40801",
"html_url": "https://github.com/huggingface/transformers/pull/40801",
"diff_url": "https://github.com/huggingface/transformers/pull/40801.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40801.patch",
"merged_at": null
} | Adding a conversation ID to the cookies so that we can track conversations across HTTP requests. This enables efficient KV cache reuse. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40801/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40800 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40800/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40800/comments | https://api.github.com/repos/huggingface/transformers/issues/40800/events | https://github.com/huggingface/transformers/pull/40800 | 3,403,156,334 | PR_kwDOCUB6oc6n2sR5 | 40,800 | [SAM2] Fix inconsistent results with original implementation with input boxes | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6886428489,
"node_id": "LA_kwDOCUB6oc8AAAABmnaPSQ",
"url": "https://api.github.com/repos/huggingface/transformers/labels/run-slow",
"name": "run-slow",
"color": "E1D519",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-09-10T16:50:43 | 2025-09-12T14:21:23 | 2025-09-12T14:21:23 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40800",
"html_url": "https://github.com/huggingface/transformers/pull/40800",
"diff_url": "https://github.com/huggingface/transformers/pull/40800.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40800.patch",
"merged_at": "2025-09-12T14:21:23"
} | # What does this PR do?
Add a padding point to box embeds as is done in the original repo.
The prompt encoder of the original repo actually never uses its `_embed_boxes` method. Instead, the boxes are first converted to points and then passed to `_embed_points`, which has this padding logic, whereas `_embed_boxes` doesn't ([source](https://github.com/facebookresearch/sam2/blob/2b90b9f5ceec907a1c18123530e92e794ad901a4/sam2/modeling/sam2_base.py#L342))
To avoid misguiding users (like I was 🙃), I'm still using the `_embed_boxes` function in the Transformers implementation, but added the padding logic to it by default.
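The box-to-points conversion described above can be sketched roughly as follows (plain Python for illustration; the label conventions - 2/3 for box corners, -1 for padding - follow SAM-style prompts, and the values are made up):

```python
# A box (x1, y1, x2, y2) becomes two labeled corner points plus one padding
# point, so the prompt encoder sees the same layout as point prompts.
box = [10.0, 20.0, 110.0, 220.0]              # (x1, y1, x2, y2)
corner_points = [box[:2], box[2:]]            # top-left and bottom-right corners
corner_labels = [2, 3]                        # box-corner labels
padded_points = corner_points + [[0.0, 0.0]]  # extra padding point
padded_labels = corner_labels + [-1]          # -1 marks the padding point
```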
Fixes https://github.com/huggingface/transformers/issues/40787#issuecomment-3274223174 | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40800/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40799 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40799/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40799/comments | https://api.github.com/repos/huggingface/transformers/issues/40799/events | https://github.com/huggingface/transformers/pull/40799 | 3,403,112,292 | PR_kwDOCUB6oc6n2ioD | 40,799 | [Trainer] Fix DP loss | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T16:34:30 | 2025-09-18T13:07:21 | 2025-09-18T13:07:20 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40799",
"html_url": "https://github.com/huggingface/transformers/pull/40799",
"diff_url": "https://github.com/huggingface/transformers/pull/40799.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40799.patch",
"merged_at": "2025-09-18T13:07:20"
} | # What does this PR do?
This PR fixes a couple of issues related to DP:
- high fp16 grad norm -> we need to prepare the optimizer; otherwise, the gradients are not unscaled when doing gradient clipping with the grad scaler
- Fix logic when using `num_items_in_batch`. For DP, when `num_items_in_batch` is not `None`, the value is by default the sum of the tokens across all devices. This is not the case for DDP, for example, so we need to be careful about that. When using `num_items_in_batch`, we weren't applying DP at all.
- Fix `num_input_tokens_seen` as it was always logged
Fixes https://github.com/huggingface/transformers/pull/40610, https://github.com/huggingface/transformers/issues/37474 and https://github.com/huggingface/transformers/pull/38938
In any case, users should use DDP instead of DP. I might start adding a warning to prompt users to switch.
Fixes the following tests:
`test_gradient_accumulation`, `test_gradient_accumulation_loss_alignment_with_loss_func`, `test_gradient_accumulation_loss_alignment_with_model_loss`
# Reproducer
Combination of `fp16`, `average_tokens_across_devices` should all give the same loss approximately
```python
from datasets import load_dataset
from transformers import (
AutoConfig,
AutoTokenizer,
DataCollatorForLanguageModeling,
GPTNeoXForCausalLM,
TrainingArguments,
Trainer
)
config = AutoConfig.from_pretrained('EleutherAI/pythia-14m')
model = GPTNeoXForCausalLM(config=config).to('cuda')
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/pythia-14m')
tokenizer.pad_token = tokenizer.eos_token
train_data = load_dataset("wiwu2390/minipile-100k", split="train")
def tokenize_function(sample):
return tokenizer(sample["text"], truncation=True, max_length=512)
tokenized_dataset = train_data.map(tokenize_function, batched=True, remove_columns=["text"])
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=False
)
training_args = TrainingArguments(
output_dir="../data/pythia-14m-minipile-100k",
num_train_epochs=3,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
gradient_accumulation_steps=4,
logging_steps=1,
save_steps=100,
learning_rate=1e-3,
weight_decay=0.01,
warmup_steps=100,
fp16=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset,
tokenizer=tokenizer,
data_collator=data_collator,
)
trainer.train()
```
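The `num_items_in_batch` point above can be illustrated with made-up numbers (this is not Trainer code): when the token count is a global sum across devices, the summed loss must be divided by it once, which differs from averaging per device first.

```python
# Two DP devices with different token counts (illustrative values only).
per_device_token_counts = [120, 80]
per_device_loss_sums = [240.0, 200.0]   # sum of per-token losses on each device

# Global normalization: divide the summed loss by the global token count once.
global_tokens = sum(per_device_token_counts)            # 200
dp_loss = sum(per_device_loss_sums) / global_tokens     # 440 / 200 = 2.2

# Per-device averaging then a plain mean weights both devices equally
# regardless of how many tokens each one saw, giving a different value.
naive_loss = sum(s / c for s, c in zip(per_device_loss_sums,
                                       per_device_token_counts)) / 2   # 2.25
```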
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40799/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40799/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40798 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40798/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40798/comments | https://api.github.com/repos/huggingface/transformers/issues/40798/events | https://github.com/huggingface/transformers/issues/40798 | 3,403,030,113 | I_kwDOCUB6oc7K1h5h | 40,798 | SigLIP2 loss is incomplete. Add the other loss. | {
"login": "SuperHotDogCat",
"id": 115141367,
"node_id": "U_kgDOBtzq9w",
"avatar_url": "https://avatars.githubusercontent.com/u/115141367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SuperHotDogCat",
"html_url": "https://github.com/SuperHotDogCat",
"followers_url": "https://api.github.com/users/SuperHotDogCat/followers",
"following_url": "https://api.github.com/users/SuperHotDogCat/following{/other_user}",
"gists_url": "https://api.github.com/users/SuperHotDogCat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SuperHotDogCat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SuperHotDogCat/subscriptions",
"organizations_url": "https://api.github.com/users/SuperHotDogCat/orgs",
"repos_url": "https://api.github.com/users/SuperHotDogCat/repos",
"events_url": "https://api.github.com/users/SuperHotDogCat/events{/privacy}",
"received_events_url": "https://api.github.com/users/SuperHotDogCat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-09-10T16:03:06 | 2025-10-10T08:11:55 | null | NONE | null | null | null | null | ### Feature request
<img width="1368" height="1212" alt="Image" src="https://github.com/user-attachments/assets/fd6da07d-23ce-42fb-a7d0-0ff646ae918c" />
As far as I know, the original SigLIP2 loss contains three parts: the sigmoid loss, the LocCa loss, and the SILC/TIPS loss.
However, in the Transformers library, the loss for SigLIP2 is currently implemented only as a sigmoid loss, just like the original [SigLIP.](https://github.com/huggingface/transformers/blob/abbed7010be490a24c46aa90dedaa7aa48d78487/src/transformers/models/siglip2/modeling_siglip2.py#L1061)
So, why not implement all the losses for SigLIP2?
```python
if return_loss:
# Adapted from https://github.com/google-research/big_vision/blob/01edb81a4716f93a48be43b3a4af14e29cdb3a7f/big_vision/trainers/proj/image_text/siglip2.py#L287
eye = torch.eye(logits_per_text.size(0), device=logits_per_text.device)
m1_diag1 = -torch.ones_like(logits_per_text) + 2 * eye
loglik = torch.nn.functional.logsigmoid(m1_diag1 * logits_per_text)
nll = -torch.sum(loglik, dim=-1)
loss = nll.mean()
```
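If the missing objectives were added, the total loss would presumably be a weighted combination of the three parts. A minimal sketch, with hypothetical weight names and defaults (the actual weighting would need to match the paper / big_vision implementation):

```python
def siglip2_total_loss(sigmoid_loss, locca_loss, silc_loss,
                       locca_weight=1.0, silc_weight=1.0):
    # Hypothetical combination of the three objectives; the weight names and
    # defaults are placeholders, not values from the SigLIP2 paper.
    return sigmoid_loss + locca_weight * locca_loss + silc_weight * silc_loss
```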
### Motivation
With this limitation, the library may not fully support fine-tuning SigLIP2.
Do you have any plans to address this issue in the Transformers library in the future?
### Your contribution
I would like to help with the implementation. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40798/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40797 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40797/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40797/comments | https://api.github.com/repos/huggingface/transformers/issues/40797/events | https://github.com/huggingface/transformers/pull/40797 | 3,402,773,154 | PR_kwDOCUB6oc6n1YT_ | 40,797 | [Sam2Video] Fix video inference with batched boxes and add test | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6886428489,
"node_id": "LA_kwDOCUB6oc8AAAABmnaPSQ",
"url": "https://api.github.com/repos/huggingface/transformers/labels/run-slow",
"name": "run-slow",
"color": "E1D519",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-09-10T14:42:37 | 2025-09-12T14:33:28 | 2025-09-12T14:33:28 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40797",
"html_url": "https://github.com/huggingface/transformers/pull/40797",
"diff_url": "https://github.com/huggingface/transformers/pull/40797.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40797.patch",
"merged_at": "2025-09-12T14:33:28"
} | # What does this PR do?
As the title says.
Fixes https://github.com/huggingface/transformers/issues/40770 | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40797/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40796 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40796/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40796/comments | https://api.github.com/repos/huggingface/transformers/issues/40796/events | https://github.com/huggingface/transformers/pull/40796 | 3,402,569,212 | PR_kwDOCUB6oc6n0rQ1 | 40,796 | [`RMSNorm`] Fix rms norm init for models that center around 1 | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T13:48:04 | 2025-09-19T12:15:54 | 2025-09-19T12:15:37 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40796",
"html_url": "https://github.com/huggingface/transformers/pull/40796",
"diff_url": "https://github.com/huggingface/transformers/pull/40796.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40796.patch",
"merged_at": "2025-09-19T12:15:36"
} | The gemma model family + qwen3 next use a flavor of RMSNorm that centers around 1:
https://github.com/huggingface/transformers/blob/75202b09283ee48263642114013155034459195a/src/transformers/models/gemma3/modeling_gemma3.py#L146
The issue here is that this accentuates computations with our standard init of 1 (1+1=2, a doubling effect) - this can be seen in flaky FA tests, for example. This PR adjusts the init so that these models are effectively centered around 1 again, just like the other models.
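The doubling effect can be seen with a toy scalar (illustrative only, not the modeling code):

```python
# Gemma-style RMSNorm multiplies by (1 + weight) instead of weight.
normed_x = 1.0                # a normalized activation, for illustration
standard_init = 1.0           # the standard RMSNorm weight init
centered_init = 0.0           # init so that (1 + weight) == 1
out_standard = (1 + standard_init) * normed_x   # 2.0: activations doubled
out_centered = (1 + centered_init) * normed_x   # 1.0: scale preserved
```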
cc @Cyrilvallez
cc @ydshieh for viz | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40796/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40795 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40795/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40795/comments | https://api.github.com/repos/huggingface/transformers/issues/40795/events | https://github.com/huggingface/transformers/pull/40795 | 3,402,476,665 | PR_kwDOCUB6oc6n0W_9 | 40,795 | Adding Support for Qwen3-VL Series | {
"login": "JJJYmmm",
"id": 92386084,
"node_id": "U_kgDOBYGzJA",
"avatar_url": "https://avatars.githubusercontent.com/u/92386084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JJJYmmm",
"html_url": "https://github.com/JJJYmmm",
"followers_url": "https://api.github.com/users/JJJYmmm/followers",
"following_url": "https://api.github.com/users/JJJYmmm/following{/other_user}",
"gists_url": "https://api.github.com/users/JJJYmmm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JJJYmmm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JJJYmmm/subscriptions",
"organizations_url": "https://api.github.com/users/JJJYmmm/orgs",
"repos_url": "https://api.github.com/users/JJJYmmm/repos",
"events_url": "https://api.github.com/users/JJJYmmm/events{/privacy}",
"received_events_url": "https://api.github.com/users/JJJYmmm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T13:21:33 | 2025-09-20T03:14:17 | 2025-09-15T10:46:18 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40795",
"html_url": "https://github.com/huggingface/transformers/pull/40795",
"diff_url": "https://github.com/huggingface/transformers/pull/40795.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40795.patch",
"merged_at": "2025-09-15T10:46:18"
} | # Adding Support for Qwen3-VL Series
This PR introduces support for the upcoming **Qwen3-VL** models, including dense and MoE variants, as well as the Instruct and Thinking versions. As the next generation of the Qwen-VL family, Qwen3-VL brings notable advances in visual understanding while preserving strong pure-text capabilities, achieving superior performance across complex multimodal tasks.
Special thanks to @Cyrilvallez, @ArthurZucker, and @zucchini-nlp for their valuable feedback and thorough reviews! 🙏
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40795/reactions",
"total_count": 176,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 31,
"confused": 0,
"heart": 39,
"rocket": 97,
"eyes": 9
} | https://api.github.com/repos/huggingface/transformers/issues/40795/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40794 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40794/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40794/comments | https://api.github.com/repos/huggingface/transformers/issues/40794/events | https://github.com/huggingface/transformers/issues/40794 | 3,402,345,539 | I_kwDOCUB6oc7Ky6xD | 40,794 | Feature Request: Option to transfer logits to CPU during generation | {
"login": "YunruiZhang",
"id": 53505797,
"node_id": "MDQ6VXNlcjUzNTA1Nzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/53505797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YunruiZhang",
"html_url": "https://github.com/YunruiZhang",
"followers_url": "https://api.github.com/users/YunruiZhang/followers",
"following_url": "https://api.github.com/users/YunruiZhang/following{/other_user}",
"gists_url": "https://api.github.com/users/YunruiZhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YunruiZhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YunruiZhang/subscriptions",
"organizations_url": "https://api.github.com/users/YunruiZhang/orgs",
"repos_url": "https://api.github.com/users/YunruiZhang/repos",
"events_url": "https://api.github.com/users/YunruiZhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/YunruiZhang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-09-10T12:46:28 | 2025-09-19T21:15:52 | null | NONE | null | null | null | null | ### Feature request
Currently, in model.generate, the Transformers implementation stores all logits on GPU until the generation is finished, at which point they are returned as a PyTorch tensor.
This design causes significant GPU memory usage, especially for long generations, since the logits for every token must remain in GPU memory until the end. As a result, the usable GPU memory is reduced, limiting batch size and sequence length when users want to return logits.
Proposed feature:
It would be very useful to add an option that transfers the logits to CPU at each step of generation (e.g., per token), and stores them as NumPy arrays (or CPU tensors). This would free up GPU memory during generation while still allowing users to access the logits afterwards.
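A minimal sketch of the pattern being proposed (names like `generate_with_cpu_scores` and the toy `step_fn` are hypothetical, not an existing Transformers API):

```python
import torch

def generate_with_cpu_scores(step_fn, input_ids, max_new_tokens):
    # Hypothetical greedy loop: after each step the logits are moved off the
    # accelerator immediately, so only one step's logits are ever resident
    # on the device instead of all of them accumulating until the end.
    cpu_scores = []
    for _ in range(max_new_tokens):
        logits = step_fn(input_ids)                   # [batch, vocab]
        cpu_scores.append(logits.detach().to("cpu"))  # offload right away
        next_token = logits.argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
    return input_ids, cpu_scores

# Toy stand-in for a model forward pass over a 10-token vocabulary.
def toy_step(ids):
    return torch.zeros(ids.shape[0], 10)

out_ids, scores = generate_with_cpu_scores(toy_step, torch.zeros(1, 3, dtype=torch.long), 4)
```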
### Motivation
When using model.generate with output_scores=True, the logits for all generated tokens are accumulated on GPU until the generation finishes. For long sequences or larger models, this quickly consumes a large portion of GPU memory, which limits batch size, sequence length, and overall usability.
### Your contribution
I’m happy to help, but I’m not very familiar with the current codebase. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40794/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40793 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40793/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40793/comments | https://api.github.com/repos/huggingface/transformers/issues/40793/events | https://github.com/huggingface/transformers/pull/40793 | 3,402,317,928 | PR_kwDOCUB6oc6nz0WZ | 40,793 | Validate processing kwargs with @strict from huggingface_hub | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T12:39:29 | 2025-10-08T14:31:02 | 2025-10-08T14:14:09 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40793",
"html_url": "https://github.com/huggingface/transformers/pull/40793",
"diff_url": "https://github.com/huggingface/transformers/pull/40793.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40793.patch",
"merged_at": "2025-10-08T14:14:09"
} | # What does this PR do?
Draft PR which will allow us to have strict type validation on all processing kwargs without having to add a dataclass object for each. The idea is to keep `TypedDict` for hinting and dynamically adapt each `TypedDict` to be compatible with `huggingface_hub.strict` validators.
This will allow us to get rid of some of the validations we already have in vision processing and enforce better validation on all kwargs.
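A toy sketch of the general idea (this is not the actual `@strict` integration; `ResizeKwargs` and `validate_kwargs` are made-up names for illustration):

```python
from typing import TypedDict, get_type_hints

class ResizeKwargs(TypedDict, total=False):
    size: int
    do_resize: bool

def validate_kwargs(td_cls, kwargs):
    # Check each kwarg against the TypedDict's annotations, rejecting
    # unknown keys and mistyped values (simple class hints only).
    hints = get_type_hints(td_cls)
    for key, value in kwargs.items():
        if key not in hints:
            raise TypeError(f"Unexpected kwarg: {key!r}")
        if not isinstance(value, hints[key]):
            raise TypeError(f"{key!r} should be {hints[key].__name__}, got {type(value).__name__}")

validate_kwargs(ResizeKwargs, {"size": 224, "do_resize": True})  # accepted
```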
For reviewers: I recommend starting from `/utils/type_validators.py` and `processing_utils.py`. The model files just fix incorrect and incomplete type hints we had
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40793/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40793/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40792 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40792/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40792/comments | https://api.github.com/repos/huggingface/transformers/issues/40792/events | https://github.com/huggingface/transformers/pull/40792 | 3,402,118,195 | PR_kwDOCUB6oc6nzIuQ | 40,792 | Read config pattern for Qwen3Next | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T11:40:30 | 2025-09-10T13:18:54 | 2025-09-10T13:18:52 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40792",
"html_url": "https://github.com/huggingface/transformers/pull/40792",
"diff_url": "https://github.com/huggingface/transformers/pull/40792.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40792.patch",
"merged_at": "2025-09-10T13:18:52"
} | # What does this PR do?
If the pattern is present in the hub config and `layer_types` is not, it will be read
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40792/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40792/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40791 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40791/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40791/comments | https://api.github.com/repos/huggingface/transformers/issues/40791/events | https://github.com/huggingface/transformers/pull/40791 | 3,402,108,587 | PR_kwDOCUB6oc6nzGnl | 40,791 | [gemma3] `Gemma3ForConditionalGeneration` compatible with assisted generation | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T11:37:54 | 2025-09-16T14:10:03 | 2025-09-16T14:08:49 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40791",
"html_url": "https://github.com/huggingface/transformers/pull/40791",
"diff_url": "https://github.com/huggingface/transformers/pull/40791.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40791.patch",
"merged_at": "2025-09-16T14:08:49"
} | # What does this PR do?
Enables assisted generation on `Gemma3ForConditionalGeneration`. Most of the novelty consists of a more precise compile-compatible prefill detection. If the new pattern is desirable, I can push it into more models :)
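For context, one common compile-friendly way to express such a check is to keep it as a tensor comparison rather than a Python bool (an illustrative pattern only; the exact detection used in this PR may differ):

```python
import torch

def looks_like_prefill(cache_position: torch.Tensor) -> torch.Tensor:
    # A tensor-valued comparison traces cleanly under torch.compile, whereas
    # calling .item() or branching on a Python bool forces a graph break.
    return cache_position[0] == 0

prefill = looks_like_prefill(torch.arange(5))    # first forward pass
decode = looks_like_prefill(torch.arange(3, 8))  # later decoding step
```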
(This also fixes flaky issues in `Gemma3Vision2TextModelTest::test_prompt_lookup_decoding_matches_greedy_search` seen on `main`) | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40791/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40791/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40790 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40790/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40790/comments | https://api.github.com/repos/huggingface/transformers/issues/40790/events | https://github.com/huggingface/transformers/pull/40790 | 3,402,050,226 | PR_kwDOCUB6oc6ny6XO | 40,790 | Handle loading non-existent checkpoints or corrupted checkpoints. | {
"login": "zhengchenyu",
"id": 10381583,
"node_id": "MDQ6VXNlcjEwMzgxNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/10381583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhengchenyu",
"html_url": "https://github.com/zhengchenyu",
"followers_url": "https://api.github.com/users/zhengchenyu/followers",
"following_url": "https://api.github.com/users/zhengchenyu/following{/other_user}",
"gists_url": "https://api.github.com/users/zhengchenyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhengchenyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhengchenyu/subscriptions",
"organizations_url": "https://api.github.com/users/zhengchenyu/orgs",
"repos_url": "https://api.github.com/users/zhengchenyu/repos",
"events_url": "https://api.github.com/users/zhengchenyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhengchenyu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-09-10T11:18:49 | 2025-09-25T03:42:34 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40790",
"html_url": "https://github.com/huggingface/transformers/pull/40790",
"diff_url": "https://github.com/huggingface/transformers/pull/40790.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40790.patch",
"merged_at": null
} | # What does this PR do?
* 1 Handle loading non-existent checkpoints
Setting resume_from_checkpoint to true at the start of training results in an error because no latest checkpoint exists yet, so it must initially be set to false or null; then, after interrupting training, it has to be flipped to true to resume from the latest checkpoint. This adjustment is unnecessary: if the latest checkpoint cannot be found at the start of training, we can simply log a message instead of raising an exception.
* 2 Handle loading corrupted checkpoints
I've noticed that an exception raised while a checkpoint is being written can leave corrupted files behind, which then causes errors when loading the checkpoint. This PR adds a "latest" tag to the checkpoint so that loaders can verify the checkpoint is complete.
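The completeness-marker idea can be sketched like this (a toy stand-in, not the PR's implementation; file names such as `COMPLETE` and `state.json` are assumptions):

```python
import json
import os
import tempfile

def save_checkpoint(state: dict, directory: str) -> None:
    # Write the checkpoint payload first, and only then create a small
    # marker file. An interrupted save never produces the marker, so a
    # loader that requires it will skip the corrupted directory.
    os.makedirs(directory, exist_ok=True)
    with open(os.path.join(directory, "state.json"), "w") as f:
        json.dump(state, f)
    with open(os.path.join(directory, "COMPLETE"), "w") as f:
        f.write("ok")

def is_complete(directory: str) -> bool:
    return os.path.exists(os.path.join(directory, "COMPLETE"))

ckpt = os.path.join(tempfile.mkdtemp(), "checkpoint-100")
save_checkpoint({"global_step": 100}, ckpt)
```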
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40790/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40789 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40789/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40789/comments | https://api.github.com/repos/huggingface/transformers/issues/40789/events | https://github.com/huggingface/transformers/pull/40789 | 3,401,969,609 | PR_kwDOCUB6oc6nyoyL | 40,789 | Fix invalid PipelineParallel member | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T10:53:14 | 2025-09-10T12:11:41 | 2025-09-10T12:06:37 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40789",
"html_url": "https://github.com/huggingface/transformers/pull/40789",
"diff_url": "https://github.com/huggingface/transformers/pull/40789.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40789.patch",
"merged_at": "2025-09-10T12:06:37"
} | # What does this PR do?
Fixes a syntax error.
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40789/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40788 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40788/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40788/comments | https://api.github.com/repos/huggingface/transformers/issues/40788/events | https://github.com/huggingface/transformers/pull/40788 | 3,401,798,555 | PR_kwDOCUB6oc6nyEw8 | 40,788 | Fix typing | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T10:01:05 | 2025-09-23T11:38:00 | 2025-09-23T11:36:02 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40788",
"html_url": "https://github.com/huggingface/transformers/pull/40788",
"diff_url": "https://github.com/huggingface/transformers/pull/40788.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40788.patch",
"merged_at": "2025-09-23T11:36:02"
} | # What does this PR do?
Add missing `Optional` annotations, fix `np.array` usage, and fix `noqa: F821` comments in typing.
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40788/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40787 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40787/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40787/comments | https://api.github.com/repos/huggingface/transformers/issues/40787/events | https://github.com/huggingface/transformers/issues/40787 | 3,401,757,858 | I_kwDOCUB6oc7KwrSi | 40,787 | SAM2 - Deteriorated performance compared to original repository | {
"login": "alex-bene",
"id": 34627055,
"node_id": "MDQ6VXNlcjM0NjI3MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/34627055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-bene",
"html_url": "https://github.com/alex-bene",
"followers_url": "https://api.github.com/users/alex-bene/followers",
"following_url": "https://api.github.com/users/alex-bene/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-bene/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-bene/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-bene/subscriptions",
"organizations_url": "https://api.github.com/users/alex-bene/orgs",
"repos_url": "https://api.github.com/users/alex-bene/repos",
"events_url": "https://api.github.com/users/alex-bene/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-bene/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-09-10T09:48:55 | 2025-09-11T13:19:38 | 2025-09-11T13:19:38 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.56.1
- Platform: Linux-6.8.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.6.0+cu124 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA RTX 5000 Ada Generation
### Who can help?
@amyeroberts, @qubvel
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from PIL import Image
from transformers import Sam2Processor, Sam2Model
import torch
from sam2.sam2_image_predictor import SAM2ImagePredictor
import requests
from matplotlib import pyplot as plt
import numpy as np
def process_seg_masks(
    image: Image.Image, masks: np.ndarray, colormap: str = "tab10", transparency: float = 0.5
) -> Image.Image:
image_mode = image.mode
full_image = image.copy().convert("RGBA")
# Generate colors from colormap
colors = plt.get_cmap(colormap)(np.arange(len(masks)))
colors = [tuple(color) for color in (colors * 255).astype(int).tolist()]
for i, mask in enumerate(masks):
# Create RGBA image with mask as transparency (alpha channel)
# Convert mask to PIL Image (assuming mask is 0-1 or 0-255)
seg_arr = (((mask * 255) if mask.max() <= 1.0 else mask) * transparency).astype(np.uint8)
# Create binary mask image
seg_mask = Image.fromarray(seg_arr)
overlay_color = Image.new("RGBA", full_image.size, colors[i])
full_image = Image.composite(overlay_color, full_image, seg_mask)
return full_image.convert(image_mode)
image_url = "https://huggingface.co/datasets/hf-internal-testing/sam2-fixtures/resolve/main/truck.jpg"
raw_image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
# Define bounding box as [x_min, y_min, x_max, y_max]
input_boxes = [[[75, 275, 1725, 850]]]
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2.1-hiera-small")
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
predictor.set_image(raw_image)
masks, iou_scores, _ = predictor.predict(box=input_boxes)
mask_sort_idxs = np.argsort(iou_scores)[::-1]
masks = masks[mask_sort_idxs]
iou_scores = iou_scores[mask_sort_idxs]
model = Sam2Model.from_pretrained("facebook/sam2.1-hiera-small")
processor = Sam2Processor.from_pretrained("facebook/sam2.1-hiera-small")
inputs = processor(images=raw_image, input_boxes=input_boxes, return_tensors="pt")
# Get segmentation mask
with torch.no_grad():
outputs = model(**inputs)
# Postprocess masks
masks_hf = processor.post_process_masks(outputs.pred_masks, inputs["original_sizes"])[0][0].cpu().numpy()
iou_scores_hf = outputs.iou_scores[0][0].cpu().numpy()
mask_sort_idxs = np.argsort(iou_scores_hf)[::-1]
masks_hf = masks_hf[mask_sort_idxs]
iou_scores_hf = iou_scores_hf[mask_sort_idxs]
# Print IoU Scores
print(iou_scores_hf)
print(iou_scores)
"""
[0.9442735 0.7145203 0.69228107]
[0.98828125 0.98046875 0.9453125 ]
"""
# Show best masks
image_draw_masks = process_seg_masks(raw_image, masks=[masks[0]])
image_draw_masks_hf = process_seg_masks(raw_image, masks=[masks_hf[0]])
display(image_draw_masks)
display(image_draw_masks_hf)
```
### Original code predictions
<img width="1800" height="1200" alt="Image" src="https://github.com/user-attachments/assets/d8bd007a-aca2-4267-8561-f3a17ccf8fed" />
### HF predictions
<img width="1800" height="1200" alt="Image" src="https://github.com/user-attachments/assets/78f632f7-c519-4622-a94d-0bf97f734aa9" />
### Expected behavior
I would expect the original and the HF implementations to output the same results. Is there something I am not understanding in terms of result reproduction? | {
"login": "alex-bene",
"id": 34627055,
"node_id": "MDQ6VXNlcjM0NjI3MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/34627055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-bene",
"html_url": "https://github.com/alex-bene",
"followers_url": "https://api.github.com/users/alex-bene/followers",
"following_url": "https://api.github.com/users/alex-bene/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-bene/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-bene/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-bene/subscriptions",
"organizations_url": "https://api.github.com/users/alex-bene/orgs",
"repos_url": "https://api.github.com/users/alex-bene/repos",
"events_url": "https://api.github.com/users/alex-bene/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-bene/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40787/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40786 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40786/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40786/comments | https://api.github.com/repos/huggingface/transformers/issues/40786/events | https://github.com/huggingface/transformers/pull/40786 | 3,401,655,651 | PR_kwDOCUB6oc6nxmHE | 40,786 | Processor load with multi-processing | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-09-10T09:19:37 | 2025-09-17T07:46:49 | 2025-09-17T07:46:49 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40786",
"html_url": "https://github.com/huggingface/transformers/pull/40786",
"diff_url": "https://github.com/huggingface/transformers/pull/40786.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40786.patch",
"merged_at": "2025-09-17T07:46:49"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/40731. It happens once in several runs and is quite hard to reproduce. TBH I reproduced it only once and can't trigger the issue anymore, but from visual inspection it looks like the root cause was in calling `cached_files` with several files.
When we have many files at the input, hf hub internally calls `thread_map` to make the loading faster. So that might have been the cause of #40731. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40786/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40785 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40785/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40785/comments | https://api.github.com/repos/huggingface/transformers/issues/40785/events | https://github.com/huggingface/transformers/issues/40785 | 3,401,548,410 | I_kwDOCUB6oc7Kv4J6 | 40,785 | Granite 20B Function Calling model hallucinating the tool call | {
"login": "dvn8weil",
"id": 190058927,
"node_id": "U_kgDOC1QRrw",
"avatar_url": "https://avatars.githubusercontent.com/u/190058927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dvn8weil",
"html_url": "https://github.com/dvn8weil",
"followers_url": "https://api.github.com/users/dvn8weil/followers",
"following_url": "https://api.github.com/users/dvn8weil/following{/other_user}",
"gists_url": "https://api.github.com/users/dvn8weil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dvn8weil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dvn8weil/subscriptions",
"organizations_url": "https://api.github.com/users/dvn8weil/orgs",
"repos_url": "https://api.github.com/users/dvn8weil/repos",
"events_url": "https://api.github.com/users/dvn8weil/events{/privacy}",
"received_events_url": "https://api.github.com/users/dvn8weil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-09-10T08:49:12 | 2025-10-13T15:00:01 | 2025-10-13T15:00:01 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.53.3
- Platform: Linux-6.14.0-1014-gcp-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA L4
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
vllm command to run the model :
```
VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 VLLM_USE_CUDA=1 vllm serve ibm-granite/granite-20b-functioncalling --tool-call-parser granite --enable-auto-tool-choice --tensor-parallel-size 2 --port 8000 --trust-remote-code --chat-template granite_template.jinja
```
with the chat-template file :
```
{% for message in messages %}
{% if message['role'] == 'system' %}
<|start_of_role|>system<|end_of_role|>{{ message['content'] }}<|end_of_text|>
{% elif message['role'] == 'user' %}
<|start_of_role|>user<|end_of_role|>{{ message['content'] }}<|end_of_text|>
{% elif message['role'] == 'assistant' %}
<|start_of_role|>assistant<|end_of_role|>{{ message['content'] }}<|end_of_text|>
{% elif message['role'] == 'tool' or message['role'] == 'function' %}
<|start_of_role|>tool<|end_of_role|>{{ message['content'] }}<|end_of_text|>
{% endif %}
{% endfor %}
<|start_of_role|>assistant<|end_of_role|>
```
curl request :
```
curl --location 'http://0.0.0.0:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer EMPTY' \
--data '{
"model": "ibm-granite/granite-20b-functioncalling",
"messages": [
{
"role": "system",
"content": "When outputting a tool call, include all the arguments in the tool call token stream. Don'\''t emit partial tool calls. For instance, for the add tool call, output both arguments x and y. Please use tools as much as possible, instead of relying on your own knowledge when it makes more sense to use the tools. You may ONLY call tools that are explicitly listed in the provided tool schema. Never invent tool names or parameters that are not in the schema. If no tool is appropriate, answer the user naturally without calling a tool. When calling a tool, include all required arguments exactly as specified in the schema."
},
{
"role": "user",
"content": "tell me the weather of San Fransisco today"
}
],
"temperature": 0.3,
"top_p": 0.9,
"tool_choice": "auto",
"tools": [
{
"type": "function",
"function": {
"name": "add",
"description": "adds two numbers",
"parameters": {
"type": "object",
"properties": {
"x": { "type": "number" },
"y": { "type": "number" }
},
"required": ["x", "y"]
}
}
},
{
"type": "function",
"function": {
"name": "multiply",
"description": "multiply two numbers",
"parameters": {
"type": "object",
"properties": {
"x": { "type": "number" },
"y": { "type": "number" }
},
"required": ["x", "y"]
}
}
},
{
"type": "function",
"function": {
"name": "search_web",
"description": "Search the internet for current information, news, and real-time data. MUST be used for any queries about recent events, current news, latest information, or anything that requires up-to-date data beyond the model'\''s training cutoff",
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string"}
},
"required": ["query"]
}
}
}
]
}'
```
### Expected behavior
the response to the curl request:
```
{
"id": "chatcmpl-6bf374155cef412383696be302539941",
"object": "chat.completion",
"created": 1757492261,
"model": "ibm-granite/granite-20b-functioncalling",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "<function_call> {\"name\": \"get_weather\", \"arguments\": '{\"location\": \"San Fransisco\", \"date\": \"today\"}'} ",
"refusal": null,
"annotations": null,
"audio": null,
"function_call": null,
"tool_calls": [],
"reasoning_content": null
},
"logprobs": null,
"finish_reason": "stop",
"stop_reason": null
}
],
"service_tier": null,
"system_fingerprint": null,
"usage": {
"prompt_tokens": 205,
"total_tokens": 239,
"completion_tokens": 34,
"prompt_tokens_details": null
},
"prompt_logprobs": null,
"kv_transfer_params": null
}
```
The response shows a call to `get_weather`, although no such function exists in the schema provided in the curl request. It should instead have used the `search_web` function. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40785/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40784 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40784/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40784/comments | https://api.github.com/repos/huggingface/transformers/issues/40784/events | https://github.com/huggingface/transformers/pull/40784 | 3,401,487,182 | PR_kwDOCUB6oc6nxCpl | 40,784 | Remove use_ipex option from Trainer | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T08:30:59 | 2025-09-11T00:25:25 | 2025-09-10T17:00:15 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40784",
"html_url": "https://github.com/huggingface/transformers/pull/40784",
"diff_url": "https://github.com/huggingface/transformers/pull/40784.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40784.patch",
"merged_at": "2025-09-10T17:00:15"
} | # What does this PR do?
It has been deprecated. | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40784/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40783 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40783/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40783/comments | https://api.github.com/repos/huggingface/transformers/issues/40783/events | https://github.com/huggingface/transformers/pull/40783 | 3,401,440,346 | PR_kwDOCUB6oc6nw5Ds | 40,783 | Fix None quantization_config equivalence with omitted param in AutoModel.from_pretrained | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-09-10T08:17:21 | 2025-10-04T17:26:02 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40783",
"html_url": "https://github.com/huggingface/transformers/pull/40783",
"diff_url": "https://github.com/huggingface/transformers/pull/40783.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40783.patch",
"merged_at": null
} | Fix `quantization_config=None` equivalence with omitted parameter in `AutoModel.from_pretrained`.
Currently, `AutoModel.from_pretrained` treats a missing `quantization_config` differently from an explicit `None`. I think passing `None` should be treated the same as omitting the parameter altogether, not as an invalid quantization config.
This PR:
- Fixes inconsistent handling of `quantization_config=None` vs omitted parameter in `AutoModel.from_pretrained`
- Ensures `quantization_config=None` behaves identically to omitting the parameter entirely
## Problem
When using TRL CLI with GPT OSS models, calls like:
```python
model = AutoModelForCausalLM.from_pretrained("my_model", quantization_config=None)
```
would fail with `AttributeError: 'NoneType' object has no attribute 'to_dict'`, differently than:
```python
model = AutoModelForCausalLM.from_pretrained("my_model")
```
Error traceback:
```python
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/fsx/albert/dev/transformers/src/transformers/models/auto/auto_factory.py", line 549, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/fsx/albert/dev/transformers/src/transformers/models/auto/configuration_auto.py", line 1323, in from_pretrained
return config_class.from_dict(config_dict, **unused_kwargs)
File "/fsx/albert/dev/transformers/src/transformers/configuration_utils.py", line 839, in from_dict
logger.info(f"Model config {config}")
File "/fsx/albert/dev/transformers/src/transformers/configuration_utils.py", line 873, in __repr__
return f"{self.__class__.__name__} {self.to_json_string()}"
File "/fsx/albert/dev/transformers/src/transformers/configuration_utils.py", line 985, in to_json_string
config_dict = self.to_diff_dict()
File "/fsx/albert/dev/transformers/src/transformers/configuration_utils.py", line 887, in to_diff_dict
config_dict = self.to_dict()
File "/fsx/albert/dev/transformers/src/transformers/configuration_utils.py", line 964, in to_dict
self.quantization_config.to_dict()
AttributeError: 'NoneType' object has no attribute 'to_dict'
```
See comment in downstream hotfix:
- https://github.com/huggingface/trl/pull/4019#pullrequestreview-3199965520
CC: @qgallouedec | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40783/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40782 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40782/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40782/comments | https://api.github.com/repos/huggingface/transformers/issues/40782/events | https://github.com/huggingface/transformers/pull/40782 | 3,401,386,964 | PR_kwDOCUB6oc6nwt4U | 40,782 | Fix typos in src | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T08:02:56 | 2025-09-11T12:47:13 | 2025-09-11T12:15:15 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40782",
"html_url": "https://github.com/huggingface/transformers/pull/40782",
"diff_url": "https://github.com/huggingface/transformers/pull/40782.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40782.patch",
"merged_at": "2025-09-11T12:15:15"
} | # What does this PR do?
As the title says. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40782/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40781 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40781/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40781/comments | https://api.github.com/repos/huggingface/transformers/issues/40781/events | https://github.com/huggingface/transformers/issues/40781 | 3,400,933,491 | I_kwDOCUB6oc7KtiBz | 40,781 | Not save processor related files when call save_model | {
"login": "zyandtom",
"id": 71203151,
"node_id": "MDQ6VXNlcjcxMjAzMTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/71203151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zyandtom",
"html_url": "https://github.com/zyandtom",
"followers_url": "https://api.github.com/users/zyandtom/followers",
"following_url": "https://api.github.com/users/zyandtom/following{/other_user}",
"gists_url": "https://api.github.com/users/zyandtom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zyandtom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyandtom/subscriptions",
"organizations_url": "https://api.github.com/users/zyandtom/orgs",
"repos_url": "https://api.github.com/users/zyandtom/repos",
"events_url": "https://api.github.com/users/zyandtom/events{/privacy}",
"received_events_url": "https://api.github.com/users/zyandtom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-09-10T05:05:18 | 2025-09-10T09:40:40 | 2025-09-10T09:40:40 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.55.0
- Platform: Linux-5.10.0-1.0.0.28-x86_64-with-glibc2.31
- Python version: 3.10.18
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: 0.16.9
- PyTorch version (accelerator?): 2.7.0+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A800-SXM4-80GB
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
trainer.save_model(output_dir)
### Expected behavior
save processor related files | {
"login": "zyandtom",
"id": 71203151,
"node_id": "MDQ6VXNlcjcxMjAzMTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/71203151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zyandtom",
"html_url": "https://github.com/zyandtom",
"followers_url": "https://api.github.com/users/zyandtom/followers",
"following_url": "https://api.github.com/users/zyandtom/following{/other_user}",
"gists_url": "https://api.github.com/users/zyandtom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zyandtom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyandtom/subscriptions",
"organizations_url": "https://api.github.com/users/zyandtom/orgs",
"repos_url": "https://api.github.com/users/zyandtom/repos",
"events_url": "https://api.github.com/users/zyandtom/events{/privacy}",
"received_events_url": "https://api.github.com/users/zyandtom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40781/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40780 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40780/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40780/comments | https://api.github.com/repos/huggingface/transformers/issues/40780/events | https://github.com/huggingface/transformers/pull/40780 | 3,400,537,589 | PR_kwDOCUB6oc6nt4Qc | 40,780 | Fix typos in tests and util | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T01:35:21 | 2025-09-10T12:14:55 | 2025-09-10T11:45:41 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40780",
"html_url": "https://github.com/huggingface/transformers/pull/40780",
"diff_url": "https://github.com/huggingface/transformers/pull/40780.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40780.patch",
"merged_at": "2025-09-10T11:45:40"
} | # What does this PR do?
Fix typos in tests and util | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40780/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40779 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40779/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40779/comments | https://api.github.com/repos/huggingface/transformers/issues/40779/events | https://github.com/huggingface/transformers/issues/40779 | 3,400,477,943 | I_kwDOCUB6oc7Kryz3 | 40,779 | AttributeError: module 'torchcodec' has no attribute 'decoders' | {
"login": "MartinYTSo",
"id": 72810148,
"node_id": "MDQ6VXNlcjcyODEwMTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/72810148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MartinYTSo",
"html_url": "https://github.com/MartinYTSo",
"followers_url": "https://api.github.com/users/MartinYTSo/followers",
"following_url": "https://api.github.com/users/MartinYTSo/following{/other_user}",
"gists_url": "https://api.github.com/users/MartinYTSo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MartinYTSo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MartinYTSo/subscriptions",
"organizations_url": "https://api.github.com/users/MartinYTSo/orgs",
"repos_url": "https://api.github.com/users/MartinYTSo/repos",
"events_url": "https://api.github.com/users/MartinYTSo/events{/privacy}",
"received_events_url": "https://api.github.com/users/MartinYTSo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-10T01:02:00 | 2025-09-12T10:26:01 | 2025-09-12T10:26:01 | NONE | null | null | null | null | Hey guys, I'm trying to train an audio model using the Hugging Face pretrained model `facebook/wav2vec2-base`.
I've trained and loaded my best model; however, when running inference with the model (see code below) I get this error:
```
from transformers import pipeline
classifier = pipeline("audio-classification", model="my_awesome_mind_model/checkpoint-6")
classifier(r"martin.wav")
```
`AttributeError: module 'torchcodec' has no attribute 'decoders'`
Now, I've encountered the same problem with `datasets`, for which I had to downgrade to version 3.6.0. I suspect this is the same problem with `pipeline` as well.
Which version of `pipeline` was not dependent on `torchcodec`?
Transformers - 4.56.0
Python 3.11.13
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40779/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40778 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40778/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40778/comments | https://api.github.com/repos/huggingface/transformers/issues/40778/events | https://github.com/huggingface/transformers/pull/40778 | 3,400,412,309 | PR_kwDOCUB6oc6ntdP7 | 40,778 | Add Olmo3 model | {
"login": "2015aroras",
"id": 19700980,
"node_id": "MDQ6VXNlcjE5NzAwOTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/19700980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/2015aroras",
"html_url": "https://github.com/2015aroras",
"followers_url": "https://api.github.com/users/2015aroras/followers",
"following_url": "https://api.github.com/users/2015aroras/following{/other_user}",
"gists_url": "https://api.github.com/users/2015aroras/gists{/gist_id}",
"starred_url": "https://api.github.com/users/2015aroras/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/2015aroras/subscriptions",
"organizations_url": "https://api.github.com/users/2015aroras/orgs",
"repos_url": "https://api.github.com/users/2015aroras/repos",
"events_url": "https://api.github.com/users/2015aroras/events{/privacy}",
"received_events_url": "https://api.github.com/users/2015aroras/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-09-10T00:22:23 | 2025-09-16T17:05:55 | 2025-09-16T11:28:23 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40778",
"html_url": "https://github.com/huggingface/transformers/pull/40778",
"diff_url": "https://github.com/huggingface/transformers/pull/40778.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40778.patch",
"merged_at": "2025-09-16T11:28:23"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
This PR adds the implementation for the upcoming Olmo 3 model. The main architectural differences from Olmo 2 are:
- Sliding window attention is used for 3 out of 4 layers. RoPE scaling is not applied to sliding window attention layers.
It's possible to add sliding window functionality directly into the Olmo 2 code, so if that is preferred, I can implement it that way.
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@ArthurZucker
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40778/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40777 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40777/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40777/comments | https://api.github.com/repos/huggingface/transformers/issues/40777/events | https://github.com/huggingface/transformers/pull/40777 | 3,400,206,744 | PR_kwDOCUB6oc6nswY0 | 40,777 | Add FastAPI + Docker example for Transformers inference | {
"login": "sahelmain",
"id": 152058340,
"node_id": "U_kgDOCRA55A",
"avatar_url": "https://avatars.githubusercontent.com/u/152058340?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sahelmain",
"html_url": "https://github.com/sahelmain",
"followers_url": "https://api.github.com/users/sahelmain/followers",
"following_url": "https://api.github.com/users/sahelmain/following{/other_user}",
"gists_url": "https://api.github.com/users/sahelmain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sahelmain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sahelmain/subscriptions",
"organizations_url": "https://api.github.com/users/sahelmain/orgs",
"repos_url": "https://api.github.com/users/sahelmain/repos",
"events_url": "https://api.github.com/users/sahelmain/events{/privacy}",
"received_events_url": "https://api.github.com/users/sahelmain/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 9258341780,
"node_id": "LA_kwDOCUB6oc8AAAACJ9cVlA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Code%20agent%20slop",
"name": "Code agent slop",
"color": "C59579",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-09-09T22:45:50 | 2025-09-11T18:28:16 | 2025-09-10T11:27:46 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40777",
"html_url": "https://github.com/huggingface/transformers/pull/40777",
"diff_url": "https://github.com/huggingface/transformers/pull/40777.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40777.patch",
"merged_at": null
} | # Add FastAPI + Docker deployment example for Transformers inference
This PR adds a production-ready FastAPI service example that demonstrates how to deploy Hugging Face Transformers models with Docker containerization.
## What's included:
- FastAPI service supporting any Transformers pipeline
- Docker containerization for easy deployment
- Performance optimizations (thread control, torch.compile)
- Comprehensive documentation and usage examples
- Automated tests and benchmarking
- **Proven 67% performance improvement** for single input
- **8.1% improvement** for batch inference
## Features:
- Configurable via environment variables for different models/tasks
- Support for text classification, generation, NER, and other pipeline tasks
- Health check endpoint and proper error handling
- Production deployment examples (Docker Compose, Kubernetes)
The example helps developers deploy AI models efficiently in production environments. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40777/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40776 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40776/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40776/comments | https://api.github.com/repos/huggingface/transformers/issues/40776/events | https://github.com/huggingface/transformers/issues/40776 | 3,400,112,057 | I_kwDOCUB6oc7KqZe5 | 40,776 | Add function for reversing chat templates | {
"login": "LakeYin",
"id": 14198049,
"node_id": "MDQ6VXNlcjE0MTk4MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/14198049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LakeYin",
"html_url": "https://github.com/LakeYin",
"followers_url": "https://api.github.com/users/LakeYin/followers",
"following_url": "https://api.github.com/users/LakeYin/following{/other_user}",
"gists_url": "https://api.github.com/users/LakeYin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LakeYin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LakeYin/subscriptions",
"organizations_url": "https://api.github.com/users/LakeYin/orgs",
"repos_url": "https://api.github.com/users/LakeYin/repos",
"events_url": "https://api.github.com/users/LakeYin/events{/privacy}",
"received_events_url": "https://api.github.com/users/LakeYin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-09-09T22:03:59 | 2025-10-17T13:51:16 | null | NONE | null | null | null | null | ### Feature request
Currently, `tokenizer.apply_chat_template` can be used to convert a chat in the form of a list of dicts into a formatted string for text generation. However, there is currently no convenient method for doing the reverse. A new function (hypothetically called `tokenizer.parse_chat_template`) could handle this.
See also:
https://stackoverflow.com/questions/79248499/how-to-reverse-the-tokenizer-apply-chat-template-method-and-handle-streaming-r
https://stackoverflow.com/questions/79248486/how-to-reverse-the-tokenizer-apply-chat-template
### Motivation
Logically, if there is a function for converting a chat into a string, there should be a function that does the reverse, which would make things easier when using `AutoModelForCausalLM` with chat models. This feature is already implemented internally in the `pipeline` API, so it should be straightforward to expose a version to end users.
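As a rough illustration of what such a parser might do (the function name, and the ChatML-style template assumed here, are hypothetical — real chat templates vary, which is exactly why a built-in inverse would be useful), a formatted chat string can be split back into a message list with a regex over the role markers:

```python
import re

# Hypothetical sketch of a "parse_chat_template"-style helper.
# Assumes a ChatML-like template where each turn is wrapped in
# <|im_start|>{role}\n{content}<|im_end|>.
TURN_RE = re.compile(r"<\|im_start\|>(\w+)\n(.*?)<\|im_end\|>", re.DOTALL)

def parse_chat_string(text):
    """Invert the template: formatted string -> list of message dicts."""
    return [
        {"role": role, "content": content}
        for role, content in TURN_RE.findall(text)
    ]

formatted = (
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\nHi, how can I help?<|im_end|>\n"
)
messages = parse_chat_string(formatted)
```

Since each tokenizer ships its own Jinja template, a real implementation would presumably need per-template (or template-derived) parsing rules rather than one fixed regex.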
### Your contribution
N/A | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40776/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40776/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40775 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40775/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40775/comments | https://api.github.com/repos/huggingface/transformers/issues/40775/events | https://github.com/huggingface/transformers/pull/40775 | 3,399,971,604 | PR_kwDOCUB6oc6nr7x1 | 40,775 | Fix condition for emitting warning when generation exceeds max model length | {
"login": "yannicks1",
"id": 43552841,
"node_id": "MDQ6VXNlcjQzNTUyODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/43552841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yannicks1",
"html_url": "https://github.com/yannicks1",
"followers_url": "https://api.github.com/users/yannicks1/followers",
"following_url": "https://api.github.com/users/yannicks1/following{/other_user}",
"gists_url": "https://api.github.com/users/yannicks1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yannicks1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yannicks1/subscriptions",
"organizations_url": "https://api.github.com/users/yannicks1/orgs",
"repos_url": "https://api.github.com/users/yannicks1/repos",
"events_url": "https://api.github.com/users/yannicks1/events{/privacy}",
"received_events_url": "https://api.github.com/users/yannicks1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-09T21:13:56 | 2025-09-22T12:21:39 | 2025-09-22T12:21:39 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40775",
"html_url": "https://github.com/huggingface/transformers/pull/40775",
"diff_url": "https://github.com/huggingface/transformers/pull/40775.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40775.patch",
"merged_at": "2025-09-22T12:21:39"
} | ## Fix condition for emitting warning when generation exceeds max model length
### Purpose
The current transformers version prints inconsistent warnings when exceeding the max model length L.
For prefills of a prompt of max model length L (max_position_embeddings), no warning is printed. This is expected, as the positional embedding of the (L + 1)-th token, which exceeds the limit, would only be needed when doing another model forward pass to generate the (L + 2)-th token.
However, when doing decodes with a context length of max model length L (max_position_embeddings), the warning comes one token too early.
Example: doing a prefill of (L - 1) tokens and requesting 2 tokens will generate the L-th token (on context length L - 1) and the (L + 1)-th token (on context length L), which is perfectly legitimate and does not exceed the limit max_position_embeddings. Nevertheless, the current Transformers version still emits the warning:
```
This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (L). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.
```
This PR adjusts the limit for which this warning is logged and makes it consistently correct across prefills and decodes.
Specifically, the PR targets the general case where the prompt is of length (L - K) and the number of new tokens are (K + 1).
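The corrected boundary can be sketched as a pure function (illustrative only, not the literal Transformers code): the final decode step runs a forward pass on a context of prompt_len + max_new_tokens - 1 tokens, so the warning should fire only when that quantity exceeds L:

```python
def exceeds_max_length(prompt_len: int, max_new_tokens: int,
                       max_position_embeddings: int) -> bool:
    """Sketch of the corrected warning condition. The last forward pass
    sees a context of prompt_len + max_new_tokens - 1 tokens; only when
    that exceeds the max model length should the warning be emitted."""
    return prompt_len + max_new_tokens - 1 > max_position_embeddings

L = 2048
assert exceeds_max_length(L, 1, L) is False       # prefill only: no warning
assert exceeds_max_length(L, 2, L) is True        # genuinely exceeds L
assert exceeds_max_length(L - 1, 2, L) is False   # the (2047, 2) case fixed here
assert exceeds_max_length(L - 1, 3, L) is True
```

These four cases match the input_output combinations in the test script below.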
### Testing
The behavior can be tested with the following script.
Note the different input_output (prompt length, max new tokens) combinations:
The effect of the PR can be observed for L = 2048 and K = 1 -> input_output = (L - K, K + 1) = (2047, 2). Without the change in this PR the warning is logged (wrong); with the change it is not emitted (correct).
The rest of the test cases below are untouched by the change.
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "JackFram/llama-160m"
input_output = (2048, 1) # warning before PR NO / after PR NO
# input_output = (2048, 2) # warning before PR YES / after PR YES
# input_output = (2047, 1) # warning before PR NO / after PR NO
# input_output = (2047, 2) # warning before PR YES / after PR NO <--- EFFECT OF THIS PR
# input_output = (2047, 3) # warning before PR YES / after PR YES
prompt_len, max_new_tokens = input_output
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = '1 0 '* (2048//4)
tokenized_prompt = tokenizer([prompt])["input_ids"][0][:prompt_len]
assert len(tokenized_prompt)==prompt_len
hf_input_tokens = torch.tensor(tokenized_prompt).unsqueeze(0)
print('input shape:', hf_input_tokens.shape)
hf_output = model.generate(hf_input_tokens,
do_sample=False,
min_new_tokens=max_new_tokens,
max_new_tokens=max_new_tokens,
return_dict_in_generate=True,
output_scores=True)
print('output shape ', hf_output.sequences.shape)
# decode output tokens after first removing input tokens (prompt)
hf_generated_tokens = hf_output.sequences[:, len(hf_input_tokens[0]):]
print(hf_generated_tokens.shape)
hf_generated_text = tokenizer.batch_decode(hf_generated_tokens)[0]
print('generated', len(hf_generated_tokens[0]), 'HF tokens')
print(f"\ngenerated tokens:\n {hf_generated_tokens!r}\n")
print(f"\ngenerated text:\n {hf_generated_text!r}\n")
print("-----------------------------------")
``` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40775/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40774 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40774/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40774/comments | https://api.github.com/repos/huggingface/transformers/issues/40774/events | https://github.com/huggingface/transformers/pull/40774 | 3,399,557,423 | PR_kwDOCUB6oc6nqiaK | 40,774 | [torchao safetensors] renaming get_state_dict function | {
"login": "liangel-02",
"id": 224883113,
"node_id": "U_kgDODWdxqQ",
"avatar_url": "https://avatars.githubusercontent.com/u/224883113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liangel-02",
"html_url": "https://github.com/liangel-02",
"followers_url": "https://api.github.com/users/liangel-02/followers",
"following_url": "https://api.github.com/users/liangel-02/following{/other_user}",
"gists_url": "https://api.github.com/users/liangel-02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liangel-02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liangel-02/subscriptions",
"organizations_url": "https://api.github.com/users/liangel-02/orgs",
"repos_url": "https://api.github.com/users/liangel-02/repos",
"events_url": "https://api.github.com/users/liangel-02/events{/privacy}",
"received_events_url": "https://api.github.com/users/liangel-02/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-09T18:48:49 | 2025-09-17T14:47:07 | 2025-09-17T09:20:51 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40774",
"html_url": "https://github.com/huggingface/transformers/pull/40774",
"diff_url": "https://github.com/huggingface/transformers/pull/40774.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40774.patch",
"merged_at": "2025-09-17T09:20:51"
} | **Summary**
Renaming `get_state_dict()` method in `hf_quantizer` to `get_state_dict_and_metadata()` and updating functionality to return metadata along with state_dict.
This reduces confusion related to enabling torchao safetensor support built out in [this PR](https://github.com/huggingface/transformers/pull/40735).
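For illustration, the shape of the renamed method can be sketched with a toy class (hypothetical names and return values — a sketch of the interface change, not the actual `HfQuantizer` code):

```python
# Sketch of the rename: the quantizer-side hook now returns the state
# dict together with the serialization metadata as a tuple, instead of
# the state dict alone.
class QuantizerSketch:
    def __init__(self, state_dict, metadata):
        self._state_dict = state_dict
        self._metadata = metadata

    def get_state_dict_and_metadata(self, model=None):
        # Previously get_state_dict(); the new name makes explicit that
        # metadata needed to reload quantized tensors is returned too.
        return self._state_dict, self._metadata

q = QuantizerSketch({"w": [1.0]}, {"format": "safetensors"})
state, meta = q.get_state_dict_and_metadata()
```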
**Test Plan**
`python -m pytest ./tests/quantization` | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40774/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40774/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40773 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40773/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40773/comments | https://api.github.com/repos/huggingface/transformers/issues/40773/events | https://github.com/huggingface/transformers/pull/40773 | 3,399,362,952 | PR_kwDOCUB6oc6np5Nc | 40,773 | Move num_items_in_batch to correct device before accelerator.gather | {
"login": "ssharpe42",
"id": 8136905,
"node_id": "MDQ6VXNlcjgxMzY5MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8136905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ssharpe42",
"html_url": "https://github.com/ssharpe42",
"followers_url": "https://api.github.com/users/ssharpe42/followers",
"following_url": "https://api.github.com/users/ssharpe42/following{/other_user}",
"gists_url": "https://api.github.com/users/ssharpe42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ssharpe42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ssharpe42/subscriptions",
"organizations_url": "https://api.github.com/users/ssharpe42/orgs",
"repos_url": "https://api.github.com/users/ssharpe42/repos",
"events_url": "https://api.github.com/users/ssharpe42/events{/privacy}",
"received_events_url": "https://api.github.com/users/ssharpe42/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-09T17:38:50 | 2025-09-10T16:49:43 | 2025-09-10T16:49:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40773",
"html_url": "https://github.com/huggingface/transformers/pull/40773",
"diff_url": "https://github.com/huggingface/transformers/pull/40773.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40773.patch",
"merged_at": "2025-09-10T16:49:43"
} | # What does this PR do?
`num_items_in_batch` ends up on the CPU in a multi-GPU DDP setting. When trying to compute the total number of items across devices, this fails, as seen in #39896. This PR moves it to the correct device before gathering across GPUs.
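The fix can be sketched in isolation (a simplified illustration, not the actual `Trainer` code; `prepare_for_gather` is a hypothetical helper name):

```python
import torch

def prepare_for_gather(num_items_in_batch, device):
    # In a multi-GPU DDP run the per-batch item count can end up as a
    # CPU tensor; accelerator.gather then fails with a device mismatch.
    # Moving the tensor to the accelerator's device first avoids that.
    if torch.is_tensor(num_items_in_batch):
        num_items_in_batch = num_items_in_batch.to(device)
    return num_items_in_batch
```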
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #39896
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@zach-huggingface, @SunMarc and @qgallouedec
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40773/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40772 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40772/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40772/comments | https://api.github.com/repos/huggingface/transformers/issues/40772/events | https://github.com/huggingface/transformers/pull/40772 | 3,399,066,018 | PR_kwDOCUB6oc6no_hu | 40,772 | [generate] Always use decoder config to init cache | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-09T16:04:49 | 2025-09-12T16:39:49 | 2025-09-12T16:24:22 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40772",
"html_url": "https://github.com/huggingface/transformers/pull/40772",
"diff_url": "https://github.com/huggingface/transformers/pull/40772.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40772.patch",
"merged_at": "2025-09-12T16:24:22"
} | # What does this PR do?
(see title)
Fixes #40644 | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40772/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40771 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40771/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40771/comments | https://api.github.com/repos/huggingface/transformers/issues/40771/events | https://github.com/huggingface/transformers/pull/40771 | 3,398,661,628 | PR_kwDOCUB6oc6nnoEb | 40,771 | Adding Support for Qwen3-Next | {
"login": "bozheng-hit",
"id": 8787969,
"node_id": "MDQ6VXNlcjg3ODc5Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8787969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bozheng-hit",
"html_url": "https://github.com/bozheng-hit",
"followers_url": "https://api.github.com/users/bozheng-hit/followers",
"following_url": "https://api.github.com/users/bozheng-hit/following{/other_user}",
"gists_url": "https://api.github.com/users/bozheng-hit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bozheng-hit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bozheng-hit/subscriptions",
"organizations_url": "https://api.github.com/users/bozheng-hit/orgs",
"repos_url": "https://api.github.com/users/bozheng-hit/repos",
"events_url": "https://api.github.com/users/bozheng-hit/events{/privacy}",
"received_events_url": "https://api.github.com/users/bozheng-hit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-09T14:08:53 | 2025-09-23T15:06:45 | 2025-09-09T21:46:57 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40771",
"html_url": "https://github.com/huggingface/transformers/pull/40771",
"diff_url": "https://github.com/huggingface/transformers/pull/40771.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40771.patch",
"merged_at": "2025-09-09T21:46:57"
} | # Adding Support for Qwen3-Next
This PR adds the support of codes for the upcoming Qwen3-Next models. For information about Qwen, please visit:
👉 https://github.com/QwenLM/Qwen3
Special thanks to @Cyrilvallez and @ArthurZucker for their valuable feedback and thorough review of this PR! | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40771/reactions",
"total_count": 111,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 111,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40771/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40770 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40770/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40770/comments | https://api.github.com/repos/huggingface/transformers/issues/40770/events | https://github.com/huggingface/transformers/issues/40770 | 3,398,656,533 | I_kwDOCUB6oc7Kk2IV | 40,770 | Sam2VideoProcessor throws tensor size mismatch when adding multiple input boxes without input_labels | {
"login": "DanForester",
"id": 160538929,
"node_id": "U_kgDOCZGhMQ",
"avatar_url": "https://avatars.githubusercontent.com/u/160538929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DanForester",
"html_url": "https://github.com/DanForester",
"followers_url": "https://api.github.com/users/DanForester/followers",
"following_url": "https://api.github.com/users/DanForester/following{/other_user}",
"gists_url": "https://api.github.com/users/DanForester/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DanForester/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanForester/subscriptions",
"organizations_url": "https://api.github.com/users/DanForester/orgs",
"repos_url": "https://api.github.com/users/DanForester/repos",
"events_url": "https://api.github.com/users/DanForester/events{/privacy}",
"received_events_url": "https://api.github.com/users/DanForester/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-09-09T14:07:40 | 2025-09-11T10:48:08 | 2025-09-11T10:48:08 | NONE | null | null | null | null | ### System Info
--- System Info ---
Platform: Linux-6.12.10-76061203-generic-x86_64-with-glibc2.35
Python: 3.11.13 | packaged by conda-forge | (main, Jun 4 2025, 14:48:23) [GCC 13.3.0]
--- PyTorch and CUDA Info ---
PyTorch Version: 2.7.1+cu128
Is CUDA available: True
CUDA Version: 12.8
cuDNN Version: 90701
GPU Name: NVIDIA GeForce RTX 5060 Ti
--- Transformers Info ---
Transformers Version: 4.57.0.dev0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
model_id = "facebook/sam2.1-hiera-large"
model = Sam2VideoModel.from_pretrained(model_id).to(device, dtype=torch.bfloat16)
processor = Sam2VideoProcessor.from_pretrained(model_id)
[..]
input_boxes = [[box1, box2]] # [image][box][coords]
obj_ids = [1, 2]
processor.add_inputs_to_inference_session(
inference_session=inference_session,
frame_idx=0,
obj_ids=obj_ids,
input_boxes=input_boxes,
)
outputs = model(
inference_session=inference_session,
frame_idx=0,
)
### Expected behavior
When I try to add multiple input_boxes to a single frame using add_inputs_to_inference_session, without specifying input_labels, I get the following error:
RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 1 but got size 2 for tensor number 1 in the list.
This seems to originate from processing_sam2_video.py, line 727:
input_labels = torch.cat([box_labels, input_labels], dim=2)
Even though I do not pass input_labels, the processor attempts to concatenate box_labels with input_labels, which is undefined or mismatched.
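The mismatch can be reproduced in isolation (shapes below are illustrative, not the exact ones used inside `processing_sam2_video.py`):

```python
import torch

# With two boxes, the label tensors end up with different sizes along
# dim 1; torch.cat along dim=2 requires all other dims to match, so the
# concatenation raises the RuntimeError quoted above.
box_labels = torch.full((1, 1, 1), 2)  # labels synthesized from the boxes
input_labels = torch.ones((1, 2, 1))   # per-object labels for two boxes
try:
    torch.cat([box_labels, input_labels], dim=2)
except RuntimeError as e:
    print(f"RuntimeError: {e}")  # sizes must match except in dimension 2
```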
| {
"login": "DanForester",
"id": 160538929,
"node_id": "U_kgDOCZGhMQ",
"avatar_url": "https://avatars.githubusercontent.com/u/160538929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DanForester",
"html_url": "https://github.com/DanForester",
"followers_url": "https://api.github.com/users/DanForester/followers",
"following_url": "https://api.github.com/users/DanForester/following{/other_user}",
"gists_url": "https://api.github.com/users/DanForester/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DanForester/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanForester/subscriptions",
"organizations_url": "https://api.github.com/users/DanForester/orgs",
"repos_url": "https://api.github.com/users/DanForester/repos",
"events_url": "https://api.github.com/users/DanForester/events{/privacy}",
"received_events_url": "https://api.github.com/users/DanForester/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40770/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40769 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40769/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40769/comments | https://api.github.com/repos/huggingface/transformers/issues/40769/events | https://github.com/huggingface/transformers/pull/40769 | 3,398,539,404 | PR_kwDOCUB6oc6nnNZM | 40,769 | Draft v5 :) | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-09T13:37:49 | 2025-10-03T14:19:31 | 2025-10-03T14:19:31 | COLLABORATOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40769",
"html_url": "https://github.com/huggingface/transformers/pull/40769",
"diff_url": "https://github.com/huggingface/transformers/pull/40769.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40769.patch",
"merged_at": null
} | # What does this PR do?
Transformers v5 | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40769/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/40769/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40768 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40768/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40768/comments | https://api.github.com/repos/huggingface/transformers/issues/40768/events | https://github.com/huggingface/transformers/issues/40768 | 3,398,465,371 | I_kwDOCUB6oc7KkHdb | 40,768 | max_length ignored in summarization pipeline, overridden by default max_new_tokens | {
"login": "noah-13",
"id": 84189001,
"node_id": "MDQ6VXNlcjg0MTg5MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/84189001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noah-13",
"html_url": "https://github.com/noah-13",
"followers_url": "https://api.github.com/users/noah-13/followers",
"following_url": "https://api.github.com/users/noah-13/following{/other_user}",
"gists_url": "https://api.github.com/users/noah-13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/noah-13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/noah-13/subscriptions",
"organizations_url": "https://api.github.com/users/noah-13/orgs",
"repos_url": "https://api.github.com/users/noah-13/repos",
"events_url": "https://api.github.com/users/noah-13/events{/privacy}",
"received_events_url": "https://api.github.com/users/noah-13/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-09-09T13:19:49 | 2025-09-24T11:54:57 | 2025-09-24T11:54:57 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.56.1
- Platform: Linux-6.8.0-79-generic-x86_64-with-glibc2.39
- Python version: 3.11.13
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.3.1+cu118 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce GTX 1080 Ti
### Who can help?
@Rocketknight1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
### Bug description
When using the summarization pipeline, setting only `max_length` (and `min_length`) still triggers a warning about `max_new_tokens`.
More importantly, the output length follows `max_new_tokens` (=256) instead of my specified `max_length` (=64).
---
### Reproduction
```python
from transformers import pipeline
from transformers import AutoTokenizer
summarizer = pipeline("summarization", model="Falconsai/text_summarization", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Falconsai/text_summarization")
text = "This is a long article about ..."
outputs = summarizer(
    [text],
    max_length=64,
    min_length=10,
    do_sample=False,
)
print(outputs)
print(len(tokenizer.encode(text, add_special_tokens=False)))
```
### Expected behavior
Output length should respect max_length=64.
### Actual behavior
Warning appears:
```
Both `max_new_tokens` (=256) and `max_length`(=64) seem to have been set.
`max_new_tokens` will take precedence.
```
Output length matches max_new_tokens=256, not my specified max_length.
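The precedence the warning describes can be sketched as follows (a simplified illustration, not the actual `generate()` logic; `effective_stop` is a hypothetical helper):

```python
def effective_stop(prompt_len, max_length=None, max_new_tokens=None):
    # As the warning states, max_new_tokens takes precedence when both
    # are set; max_length is only honored when max_new_tokens is unset.
    if max_new_tokens is not None:
        return prompt_len + max_new_tokens
    return max_length

# The pipeline's default max_new_tokens=256 silently overrides a
# user-supplied max_length=64:
print(effective_stop(30, max_length=64, max_new_tokens=256))  # 286
print(effective_stop(30, max_length=64))                      # 64
```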
### Expected behavior
Output length should respect max_length=64. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40768/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40767 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40767/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40767/comments | https://api.github.com/repos/huggingface/transformers/issues/40767/events | https://github.com/huggingface/transformers/issues/40767 | 3,398,451,512 | I_kwDOCUB6oc7KkEE4 | 40,767 | 3D Object Detection Models | {
"login": "SeucheAchat9115",
"id": 65967380,
"node_id": "MDQ6VXNlcjY1OTY3Mzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/65967380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SeucheAchat9115",
"html_url": "https://github.com/SeucheAchat9115",
"followers_url": "https://api.github.com/users/SeucheAchat9115/followers",
"following_url": "https://api.github.com/users/SeucheAchat9115/following{/other_user}",
"gists_url": "https://api.github.com/users/SeucheAchat9115/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SeucheAchat9115/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SeucheAchat9115/subscriptions",
"organizations_url": "https://api.github.com/users/SeucheAchat9115/orgs",
"repos_url": "https://api.github.com/users/SeucheAchat9115/repos",
"events_url": "https://api.github.com/users/SeucheAchat9115/events{/privacy}",
"received_events_url": "https://api.github.com/users/SeucheAchat9115/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-09-09T13:16:33 | 2025-09-15T23:12:16 | null | NONE | null | null | null | null | ### Model description
Hi together,
is there a reason 3D object detection models like those in mmdet3d have not been implemented, or an existing thread where this is discussed? I have not found any discussion.
Thanks
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
BEVFormer:
https://github.com/fundamentalvision/BEVFormer | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40767/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40766 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40766/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40766/comments | https://api.github.com/repos/huggingface/transformers/issues/40766/events | https://github.com/huggingface/transformers/pull/40766 | 3,398,205,012 | PR_kwDOCUB6oc6nmJ1- | 40,766 | Fix config dtype parsing for Emu3 edge case | {
"login": "Isotr0py",
"id": 41363108,
"node_id": "MDQ6VXNlcjQxMzYzMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/41363108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Isotr0py",
"html_url": "https://github.com/Isotr0py",
"followers_url": "https://api.github.com/users/Isotr0py/followers",
"following_url": "https://api.github.com/users/Isotr0py/following{/other_user}",
"gists_url": "https://api.github.com/users/Isotr0py/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Isotr0py/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Isotr0py/subscriptions",
"organizations_url": "https://api.github.com/users/Isotr0py/orgs",
"repos_url": "https://api.github.com/users/Isotr0py/repos",
"events_url": "https://api.github.com/users/Isotr0py/events{/privacy}",
"received_events_url": "https://api.github.com/users/Isotr0py/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-09-09T12:24:12 | 2025-09-12T13:07:45 | 2025-09-11T08:26:45 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40766",
"html_url": "https://github.com/huggingface/transformers/pull/40766",
"diff_url": "https://github.com/huggingface/transformers/pull/40766.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40766.patch",
"merged_at": "2025-09-11T08:26:45"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/vllm-project/vllm/pull/24447#issuecomment-3268847418
- "dtype" is one of the tokens in `Emu3Config`'s `vocabulary_map`, so the current config `dtype` parsing logic can run into issues with a vocab map entry like `{'dtype': 47727}`.
- This PR makes the dtype parsing logic more robust.
**Reproduction code**
```python3
from transformers import AutoConfig
config = AutoConfig.from_pretrained("BAAI/Emu3-Chat-hf")
print(config)
```
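The failure mode can be illustrated with a simplified stand-in for the parsing logic (not the actual `PretrainedConfig` code; the key names in `cfg` are illustrative):

```python
def find_dtype_naive(config: dict):
    # Naive recursive search: returns the value of the first "dtype" key
    # found anywhere, including a token id inside Emu3's vocabulary_map.
    for key, value in config.items():
        if key == "dtype":
            return value
        if isinstance(value, dict):
            found = find_dtype_naive(value)
            if found is not None:
                return found
    return None


def find_dtype_robust(config: dict):
    # More robust variant: only accept string-valued dtypes (e.g.
    # "bfloat16") and skip the vocabulary, so token ids are ignored.
    for key, value in config.items():
        if key == "dtype" and isinstance(value, str):
            return value
        if isinstance(value, dict) and key != "vocabulary_map":
            found = find_dtype_robust(value)
            if found is not None:
                return found
    return None


cfg = {"vocabulary_map": {"dtype": 47727}, "text_config": {"dtype": "bfloat16"}}
print(find_dtype_naive(cfg))   # 47727 (wrong: a token id, not a dtype)
print(find_dtype_robust(cfg))  # bfloat16
```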
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40766/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40765 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40765/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40765/comments | https://api.github.com/repos/huggingface/transformers/issues/40765/events | https://github.com/huggingface/transformers/pull/40765 | 3,398,111,891 | PR_kwDOCUB6oc6nl38b | 40,765 | Adds Causal Conv 1D kernel for mamba models | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-09T12:02:12 | 2025-09-12T10:22:48 | 2025-09-12T10:22:26 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40765",
"html_url": "https://github.com/huggingface/transformers/pull/40765",
"diff_url": "https://github.com/huggingface/transformers/pull/40765.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40765.patch",
"merged_at": "2025-09-12T10:22:26"
} | # What does this PR do?
Adds the https://huggingface.co/kernels-community/causal-conv1d kernel to the mamba model.
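For intuition, the fused kernel computes a causal depthwise 1D convolution, i.e. each output step only sees current and past inputs. A rough pure-Python reference for a single channel (illustrative only, not the kernel's actual API):

```python
def causal_conv1d_ref(x, weight, bias=0.0):
    # x: inputs over time for one channel; weight: filter taps, oldest first.
    k = len(weight)
    padded = [0.0] * (k - 1) + list(x)  # left-pad so no future input leaks in
    return [sum(w * padded[t + i] for i, w in enumerate(weight)) + bias
            for t in range(len(x))]

print(causal_conv1d_ref([1.0, 2.0, 3.0], [1.0, 2.0]))  # → [2.0, 5.0, 8.0]
```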
| {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40765/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40765/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40764 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40764/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40764/comments | https://api.github.com/repos/huggingface/transformers/issues/40764/events | https://github.com/huggingface/transformers/pull/40764 | 3,397,930,781 | PR_kwDOCUB6oc6nlSRb | 40,764 | Replace image classification loss functions to `self.loss_function` | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-09T11:08:21 | 2025-09-12T11:59:37 | 2025-09-12T11:59:37 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40764",
"html_url": "https://github.com/huggingface/transformers/pull/40764",
"diff_url": "https://github.com/huggingface/transformers/pull/40764.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40764.patch",
"merged_at": "2025-09-12T11:59:37"
} | # What does this PR do?
Replaces the explicit image classification loss functions with `self.loss_function` | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40764/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40763 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40763/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40763/comments | https://api.github.com/repos/huggingface/transformers/issues/40763/events | https://github.com/huggingface/transformers/pull/40763 | 3,397,443,172 | PR_kwDOCUB6oc6njmd9 | 40,763 | Fix: swanlab `public.cloud.experiment_url` api error | {
"login": "Zeyi-Lin",
"id": 58305964,
"node_id": "MDQ6VXNlcjU4MzA1OTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/58305964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zeyi-Lin",
"html_url": "https://github.com/Zeyi-Lin",
"followers_url": "https://api.github.com/users/Zeyi-Lin/followers",
"following_url": "https://api.github.com/users/Zeyi-Lin/following{/other_user}",
"gists_url": "https://api.github.com/users/Zeyi-Lin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zeyi-Lin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zeyi-Lin/subscriptions",
"organizations_url": "https://api.github.com/users/Zeyi-Lin/orgs",
"repos_url": "https://api.github.com/users/Zeyi-Lin/repos",
"events_url": "https://api.github.com/users/Zeyi-Lin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zeyi-Lin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-09T09:11:06 | 2025-09-09T09:28:45 | 2025-09-09T09:28:13 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40763",
"html_url": "https://github.com/huggingface/transformers/pull/40763",
"diff_url": "https://github.com/huggingface/transformers/pull/40763.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40763.patch",
"merged_at": "2025-09-09T09:28:13"
} | # What does this PR do?
👋 It is a tiny fix. This PR fixes an API error in the swanlab integration by changing `public.cloud.exp_url` to the correct `public.cloud.experiment_url`.
## Who can review?
❤️ Really hope @SunMarc can take a look at this PR. | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40763/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40762 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40762/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40762/comments | https://api.github.com/repos/huggingface/transformers/issues/40762/events | https://github.com/huggingface/transformers/pull/40762 | 3,397,403,251 | PR_kwDOCUB6oc6njdrF | 40,762 | Adapt NPU fusion operators to Qwen2.5-VL model | {
"login": "frozenleaves",
"id": 46097299,
"node_id": "MDQ6VXNlcjQ2MDk3Mjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/46097299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frozenleaves",
"html_url": "https://github.com/frozenleaves",
"followers_url": "https://api.github.com/users/frozenleaves/followers",
"following_url": "https://api.github.com/users/frozenleaves/following{/other_user}",
"gists_url": "https://api.github.com/users/frozenleaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frozenleaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frozenleaves/subscriptions",
"organizations_url": "https://api.github.com/users/frozenleaves/orgs",
"repos_url": "https://api.github.com/users/frozenleaves/repos",
"events_url": "https://api.github.com/users/frozenleaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/frozenleaves/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-09-09T09:02:06 | 2025-09-09T11:22:31 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40762",
"html_url": "https://github.com/huggingface/transformers/pull/40762",
"diff_url": "https://github.com/huggingface/transformers/pull/40762.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40762.patch",
"merged_at": null
This PR adapts the Ascend NPU fusion operators for the `Qwen2.5-VL` series models to improve their performance on the NPU. Currently, this PR only makes **NPU-specific, non-backward-compatible modifications** and does not meet the conditions for merging into the mainline. **It is only applicable to the joint innovation version for restricted customers**. If any bugs are encountered, we can communicate and update at any time.
The modifications in this PR enable support for three NPU fusion operators: NPU Flash Attention, NPU RMSNorm, and NPU SwiGLU. Currently, it **only supports the Qwen2.5-VL series models**, and large-scale verification has only been conducted on the Qwen2.5-VL-7B model. Other models in the Qwen2.5-VL series (such as 32B) have not been verified, and accuracy and performance are not guaranteed. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40762/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40761 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40761/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40761/comments | https://api.github.com/repos/huggingface/transformers/issues/40761/events | https://github.com/huggingface/transformers/pull/40761 | 3,397,371,228 | PR_kwDOCUB6oc6njWqb | 40,761 | [Docs] Add missing class documentation for optimizer_schedules (#31870, #23010) | {
"login": "jijihuny",
"id": 112816117,
"node_id": "U_kgDOBrlv9Q",
"avatar_url": "https://avatars.githubusercontent.com/u/112816117?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jijihuny",
"html_url": "https://github.com/jijihuny",
"followers_url": "https://api.github.com/users/jijihuny/followers",
"following_url": "https://api.github.com/users/jijihuny/following{/other_user}",
"gists_url": "https://api.github.com/users/jijihuny/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jijihuny/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jijihuny/subscriptions",
"organizations_url": "https://api.github.com/users/jijihuny/orgs",
"repos_url": "https://api.github.com/users/jijihuny/repos",
"events_url": "https://api.github.com/users/jijihuny/events{/privacy}",
"received_events_url": "https://api.github.com/users/jijihuny/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-09T08:54:39 | 2025-09-10T21:58:21 | 2025-09-10T21:58:21 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40761",
"html_url": "https://github.com/huggingface/transformers/pull/40761",
"diff_url": "https://github.com/huggingface/transformers/pull/40761.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40761.patch",
"merged_at": "2025-09-10T21:58:21"
} | # What does this PR do?
I found some missing information in the [documentation for the optimization classes](https://huggingface.co/docs/transformers/main_classes/optimizer_schedules).
In #31870 and #23010, new optimizer schedules were merged, but they are not included in the documentation yet.
This PR simply adds those classes to the documentation.
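For context, #31870 added the warmup-stable-decay (WSD) schedule; a rough sketch of the learning-rate multiplier that kind of schedule applies (illustrative only, not the library implementation):

```python
def wsd_factor(step, num_warmup, num_stable, num_decay):
    # Linear warmup, constant plateau, then linear decay to zero.
    if step < num_warmup:
        return step / max(1, num_warmup)
    if step < num_warmup + num_stable:
        return 1.0
    remaining = num_warmup + num_stable + num_decay - step
    return max(0.0, remaining / max(1, num_decay))

print(wsd_factor(5, 10, 100, 10))    # → 0.5 (mid-warmup)
print(wsd_factor(50, 10, 100, 10))   # → 1.0 (stable plateau)
print(wsd_factor(115, 10, 100, 10))  # → 0.5 (mid-decay)
```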
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40761/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40760 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40760/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40760/comments | https://api.github.com/repos/huggingface/transformers/issues/40760/events | https://github.com/huggingface/transformers/pull/40760 | 3,397,279,517 | PR_kwDOCUB6oc6njCsX | 40,760 | 🚨🚨🚨 Fully remove Tensorflow and Jax support library-wide | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 9105758243,
"node_id": "LA_kwDOCUB6oc8AAAACHr7YIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for_v5?",
"name": "for_v5?",
"color": "35BC94",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-09-09T08:32:43 | 2025-09-18T16:27:41 | 2025-09-18T16:27:39 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40760",
"html_url": "https://github.com/huggingface/transformers/pull/40760",
"diff_url": "https://github.com/huggingface/transformers/pull/40760.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40760.patch",
"merged_at": "2025-09-18T16:27:39"
} | # What does this PR do?
Apart from obvious tf/jax support, I believe the following should be the only potential breaking changes to torch-only code:
- pipelines do not take `framework` argument anymore
- onnx config methods do not take `framework` argument anymore
It may break current torch code if users do `framework="pt"` explicitly, but it's a necessary change. It makes no sense to keep those arguments, as the only framework working for those objects is now `torch`. Would be weird to keep it only for BC, as we are breaking the support anyway.
Note: I did not remove traces of tensorflow/jax in docs `.md` (markdown) files for now, as this PR is already enormous. It's a very tedious task, and moreover a lot of doc is written in another alphabet that I cannot read at all. Will be done in a subsequent PR, hopefully with the help of AI (should be a perfect fit for that)
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40760/reactions",
"total_count": 20,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 12,
"rocket": 8,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40760/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40759 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40759/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40759/comments | https://api.github.com/repos/huggingface/transformers/issues/40759/events | https://github.com/huggingface/transformers/pull/40759 | 3,396,335,804 | PR_kwDOCUB6oc6nf1ky | 40,759 | feat: add qwen3 pruning support | {
"login": "wangwenmingaa",
"id": 30922691,
"node_id": "MDQ6VXNlcjMwOTIyNjkx",
"avatar_url": "https://avatars.githubusercontent.com/u/30922691?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangwenmingaa",
"html_url": "https://github.com/wangwenmingaa",
"followers_url": "https://api.github.com/users/wangwenmingaa/followers",
"following_url": "https://api.github.com/users/wangwenmingaa/following{/other_user}",
"gists_url": "https://api.github.com/users/wangwenmingaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangwenmingaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangwenmingaa/subscriptions",
"organizations_url": "https://api.github.com/users/wangwenmingaa/orgs",
"repos_url": "https://api.github.com/users/wangwenmingaa/repos",
"events_url": "https://api.github.com/users/wangwenmingaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangwenmingaa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-09-09T02:42:49 | 2025-09-12T07:47:07 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40759",
"html_url": "https://github.com/huggingface/transformers/pull/40759",
"diff_url": "https://github.com/huggingface/transformers/pull/40759.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40759.patch",
"merged_at": null
} | # What does this PR do?
This PR enables models with structured pruning to be loaded correctly. After structured pruning, the model configuration files are modified, as the layer_head_num and layer_inter_size parameters may vary across the layers of the network. When the Transformers library loads the configuration file, it must now map these parameters layer by layer. This PR adjusts the Qwen3 configuration loading logic in Transformers to ensure compatibility with models that have undergone structured pruning.
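As a rough illustration of the layer-by-layer mapping (hypothetical helper, not the actual patch): a pruned checkpoint may store a list with one value per layer where an unpruned one stores a single int, and loading has to normalize both forms.

```python
def per_layer(value, num_layers):
    # Unpruned configs store one int; pruned configs store one value per layer.
    if isinstance(value, int):
        return [value] * num_layers
    if len(value) != num_layers:
        raise ValueError(f"expected {num_layers} entries, got {len(value)}")
    return list(value)

print(per_layer(32, 4))               # → [32, 32, 32, 32]
print(per_layer([32, 16, 16, 8], 4))  # → [32, 16, 16, 8]
```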
## Before submitting
1. Verify that the model after structured pruning can be loaded correctly by the Transformers library.
2. Confirm that unpruned models can still be loaded normally by the Transformers library. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40759/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40758 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40758/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40758/comments | https://api.github.com/repos/huggingface/transformers/issues/40758/events | https://github.com/huggingface/transformers/pull/40758 | 3,394,709,481 | PR_kwDOCUB6oc6naU8B | 40,758 | docs: add continuous batching to serving | {
"login": "McPatate",
"id": 9112841,
"node_id": "MDQ6VXNlcjkxMTI4NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/McPatate",
"html_url": "https://github.com/McPatate",
"followers_url": "https://api.github.com/users/McPatate/followers",
"following_url": "https://api.github.com/users/McPatate/following{/other_user}",
"gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/McPatate/subscriptions",
"organizations_url": "https://api.github.com/users/McPatate/orgs",
"repos_url": "https://api.github.com/users/McPatate/repos",
"events_url": "https://api.github.com/users/McPatate/events{/privacy}",
"received_events_url": "https://api.github.com/users/McPatate/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-08T15:34:10 | 2025-09-08T15:51:08 | 2025-09-08T15:50:28 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40758",
"html_url": "https://github.com/huggingface/transformers/pull/40758",
"diff_url": "https://github.com/huggingface/transformers/pull/40758.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40758.patch",
"merged_at": "2025-09-08T15:50:28"
} | Adding a description of continuous batching in serving docs | {
"login": "McPatate",
"id": 9112841,
"node_id": "MDQ6VXNlcjkxMTI4NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/McPatate",
"html_url": "https://github.com/McPatate",
"followers_url": "https://api.github.com/users/McPatate/followers",
"following_url": "https://api.github.com/users/McPatate/following{/other_user}",
"gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/McPatate/subscriptions",
"organizations_url": "https://api.github.com/users/McPatate/orgs",
"repos_url": "https://api.github.com/users/McPatate/repos",
"events_url": "https://api.github.com/users/McPatate/events{/privacy}",
"received_events_url": "https://api.github.com/users/McPatate/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40758/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40757 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40757/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40757/comments | https://api.github.com/repos/huggingface/transformers/issues/40757/events | https://github.com/huggingface/transformers/pull/40757 | 3,394,640,576 | PR_kwDOCUB6oc6naF3Q | 40,757 | Fix Llama4 hidden_states/attentions passthrough with getattr fallback | {
"login": "moonrunnerkc",
"id": 125813226,
"node_id": "U_kgDOB3_B6g",
"avatar_url": "https://avatars.githubusercontent.com/u/125813226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moonrunnerkc",
"html_url": "https://github.com/moonrunnerkc",
"followers_url": "https://api.github.com/users/moonrunnerkc/followers",
"following_url": "https://api.github.com/users/moonrunnerkc/following{/other_user}",
"gists_url": "https://api.github.com/users/moonrunnerkc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moonrunnerkc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moonrunnerkc/subscriptions",
"organizations_url": "https://api.github.com/users/moonrunnerkc/orgs",
"repos_url": "https://api.github.com/users/moonrunnerkc/repos",
"events_url": "https://api.github.com/users/moonrunnerkc/events{/privacy}",
"received_events_url": "https://api.github.com/users/moonrunnerkc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-08T15:14:37 | 2025-09-08T16:16:53 | 2025-09-08T16:16:52 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40757",
"html_url": "https://github.com/huggingface/transformers/pull/40757",
"diff_url": "https://github.com/huggingface/transformers/pull/40757.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40757.patch",
"merged_at": null
} | Fixes #40754
This PR addresses a mismatch in `Llama4ForCausalLM` where the output of `Llama4TextModel` did not always provide `hidden_states` or `attentions`.
The fix replaces direct attribute access with `getattr(..., None)` to prevent crashes when these fields are missing.
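As a minimal sketch of the defensive-access pattern (the class and field names below are stand-ins for illustration, not the actual transformers source):

```python
# Stand-in output object that, like the reported case, lacks some fields.
class FakeModelOutput:
    last_hidden_state = "final-layer states"

out = FakeModelOutput()

# Direct attribute access would raise AttributeError for a missing field;
# getattr with a default falls back to None instead of crashing.
hidden_states = getattr(out, "hidden_states", None)
attentions = getattr(out, "attentions", None)
print(hidden_states, attentions)  # None None
```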
What was changed:
- Updated `modeling_llama4.py` to safely handle optional `hidden_states` and `attentions`.
- Added a dedicated test suite `test_modeling_llama4_outputs.py` covering:
- Missing `hidden_states`
- Missing `attentions`
- Integration with `CausalLMOutputWithPast`
Why this is effective:
- Prevents `AttributeError` crashes in real usage.
- Ensures outputs remain consistent regardless of which fields are present.
- Lightweight, defensive, and fully backward-compatible.
✅ All new tests pass locally (3/3).
✅ Existing tests unaffected.
✅ Implementation is clean, maintainable, and aligned with Hugging Face style.
| {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40757/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40756 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40756/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40756/comments | https://api.github.com/repos/huggingface/transformers/issues/40756/events | https://github.com/huggingface/transformers/pull/40756 | 3,394,269,704 | PR_kwDOCUB6oc6nY2Yw | 40,756 | [WIP] Add Canary | {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-09-08T13:36:18 | 2025-09-08T13:45:55 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40756",
"html_url": "https://github.com/huggingface/transformers/pull/40756",
"diff_url": "https://github.com/huggingface/transformers/pull/40756.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40756.patch",
"merged_at": null
} | # What does this PR do?
For now, a simple draft of the converted tokenizer for [canary-1b-v2](https://huggingface.co/nvidia/canary-1b-v2):
```python
from transformers import PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast.from_pretrained("eustlb/canary-1b-v2")
``` | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40756/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40755 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40755/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40755/comments | https://api.github.com/repos/huggingface/transformers/issues/40755/events | https://github.com/huggingface/transformers/pull/40755 | 3,394,149,491 | PR_kwDOCUB6oc6nYcIg | 40,755 | [TimesFM] Add support for forecasting with covariates | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-09-08T13:05:54 | 2025-10-21T08:52:03 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40755",
"html_url": "https://github.com/huggingface/transformers/pull/40755",
"diff_url": "https://github.com/huggingface/transformers/pull/40755.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40755.patch",
"merged_at": null
} | # What does this PR do?
Add support for forecasting with covariates. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40755/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40754 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40754/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40754/comments | https://api.github.com/repos/huggingface/transformers/issues/40754/events | https://github.com/huggingface/transformers/issues/40754 | 3,394,009,625 | I_kwDOCUB6oc7KTHoZ | 40,754 | Potentially incorrect value assignment of Llama4TextModel's output in Llama4ForCausalLM's output? | {
"login": "st143575",
"id": 19676140,
"node_id": "MDQ6VXNlcjE5Njc2MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/19676140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/st143575",
"html_url": "https://github.com/st143575",
"followers_url": "https://api.github.com/users/st143575/followers",
"following_url": "https://api.github.com/users/st143575/following{/other_user}",
"gists_url": "https://api.github.com/users/st143575/gists{/gist_id}",
"starred_url": "https://api.github.com/users/st143575/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/st143575/subscriptions",
"organizations_url": "https://api.github.com/users/st143575/orgs",
"repos_url": "https://api.github.com/users/st143575/repos",
"events_url": "https://api.github.com/users/st143575/events{/privacy}",
"received_events_url": "https://api.github.com/users/st143575/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-09-08T12:31:39 | 2025-09-16T19:25:03 | 2025-09-11T08:39:06 | NONE | null | null | null | null | ### System Info
**System Info**
- `transformers` version: 4.55.4
- Platform: Linux-6.15.9-201.fc42.x86_64-x86_64-with-glibc2.41
- Python version: 3.13.5
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA RTX A6000
### Who can help?
@ArthurZucker
@amyeroberts
@qubvel
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
**Task Detail**
Obtaining hidden_states from the outputs of Llama4ForCausalLM
**Problem**
In the source code [modeling_llama4.py](https://github.com/huggingface/transformers/blob/v4.55.4/src/transformers/models/llama4/modeling_llama4.py), the output of Llama4ForCausalLM contains a *hidden_states* field (see [line 642](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L642)), which is assigned *outputs.hidden_states*. Here, *outputs* is the output of Llama4TextModel (see [line 619](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L619C9-L619C16)). However, the output of Llama4TextModel consists of a *last_hidden_state* (assigned the value of *hidden_states*) and a *past_key_values*, but no *hidden_states* (see [lines 554-557](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L554-L557)).
Thus, I'm wondering whether there is a typo in [line 642](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L642), where *hidden_states=outputs.hidden_states* should be replaced by *hidden_states=outputs.last_hidden_state*, or in [line 555](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L555C13-L555C45), where *last_hidden_state=hidden_states* should be replaced by *hidden_states=hidden_states*?
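For context, the convention used by output classes elsewhere in the library is that `last_hidden_state` holds the final layer's output, while `hidden_states` is an optional per-layer tuple that is only populated when `output_hidden_states=True` is requested; a minimal stand-in (not the real output class):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ToyOutput:
    # Final layer's output: always present.
    last_hidden_state: object
    # Per-layer states: only filled when output_hidden_states=True was requested.
    hidden_states: Optional[Tuple] = None

out = ToyOutput(last_hidden_state="layer-N states")
print(out.hidden_states)  # None unless the model populated it
```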
Thank you for your patience!
### Expected behavior
An explanation or a correction of the source code in [modeling_llama4.py](https://github.com/huggingface/transformers/blob/v4.55.4/src/transformers/models/llama4/modeling_llama4.py) | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40754/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40753 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40753/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40753/comments | https://api.github.com/repos/huggingface/transformers/issues/40753/events | https://github.com/huggingface/transformers/pull/40753 | 3,393,579,070 | PR_kwDOCUB6oc6nWgCL | 40,753 | Fix order of mask functions when using `and/or_mask_function` | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-08T10:22:17 | 2025-09-08T10:31:46 | 2025-09-08T10:31:43 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40753",
"html_url": "https://github.com/huggingface/transformers/pull/40753",
"diff_url": "https://github.com/huggingface/transformers/pull/40753.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40753.patch",
"merged_at": "2025-09-08T10:31:43"
} | # What does this PR do?
If using the `and/or_mask_function` arguments in `create_causal_mask`, it is important to apply those masks before any other deviations such as packed sequences, padding, etc., as users expect to slightly deviate from the attention pattern but not to account for such internals!
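To illustrate the combination semantics with plain callables (this mirrors the usual `(batch, head, q_idx, kv_idx) -> bool` mask-function signature, but is a standalone sketch rather than the library's actual combinators):

```python
def causal(b, h, q, kv):
    # Base causal pattern: a query may only attend to earlier or same positions.
    return kv <= q

def sliding_window(window):
    def fn(b, h, q, kv):
        # Restrict attention to the last `window` positions.
        return q - kv < window
    return fn

def and_masks(*fns):
    # AND-combine the user's masks with the base pattern *before* any
    # padding/packing adjustments, so users only reason about the pattern.
    def fn(b, h, q, kv):
        return all(f(b, h, q, kv) for f in fns)
    return fn

mask = and_masks(causal, sliding_window(2))
print(mask(0, 0, 3, 2))  # True: kv=2 <= q=3 and within the window
print(mask(0, 0, 3, 0))  # False: outside the sliding window
```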
Also take the opportunity to use `getattr` instead of direct object reference for the `is_compileable` flag of the Cache, so that non-standard caches don't need to have the class attribute
Merging as I own the masking code anyway, but cc @ArthurZucker for viz!
Also cc @vasqu for viz, it's what I said the other day when talking about the Embedding Gemma integration! | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40753/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40753/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40752 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40752/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40752/comments | https://api.github.com/repos/huggingface/transformers/issues/40752/events | https://github.com/huggingface/transformers/issues/40752 | 3,393,471,719 | I_kwDOCUB6oc7KRETn | 40,752 | How to extract attention weights for the first generated token? | {
"login": "VincentLHH",
"id": 43716207,
"node_id": "MDQ6VXNlcjQzNzE2MjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/43716207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VincentLHH",
"html_url": "https://github.com/VincentLHH",
"followers_url": "https://api.github.com/users/VincentLHH/followers",
"following_url": "https://api.github.com/users/VincentLHH/following{/other_user}",
"gists_url": "https://api.github.com/users/VincentLHH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VincentLHH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VincentLHH/subscriptions",
"organizations_url": "https://api.github.com/users/VincentLHH/orgs",
"repos_url": "https://api.github.com/users/VincentLHH/repos",
"events_url": "https://api.github.com/users/VincentLHH/events{/privacy}",
"received_events_url": "https://api.github.com/users/VincentLHH/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-08T09:53:16 | 2025-09-08T11:41:22 | 2025-09-08T11:40:19 | NONE | null | null | null | null | **Title:** Request for clarification: How to extract attention weights for the first generated token?
**Description:**
Hi, I'm trying to extract the attention weights **of the first generated token** (i.e., the first new token produced by `generate()`) with respect to the input prompt. However, I'm observing inconsistent behavior in the shape of `attentions` returned by `model.generate(..., output_attentions=True)`.
Here's what I found:
- For `step 0` (the first generation step), `attentions[0][layer].shape` is `(batch, heads, seq_len, seq_len)` — e.g., `[1, 16, 1178, 1178]`, where `seq_len` equals the input prompt length.
- This appears to be the **full self-attention matrix of the prompt context**, not the attention of the newly generated token.
- Starting from `step 1`, the shape becomes `(batch, heads, 1, ctx_len)`, which correctly represents the attention of a single generated token.
**Question:**
- Is there a way to directly extract the attention weights **from the first generated token** (i.e., the query of the first new token attending to the prompt keys)?
- Or is the intended behavior to use the last position of the context attention (i.e., `attentions[0][layer][..., -1, :]`) as a proxy for the generation decision?
**Use Case:**
I want to interpret which parts of the input prompt the model attends to when generating the first output token, for interpretability and analysis purposes.
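To make the proposed proxy concrete, here is the slicing on a toy nested-list stand-in for `outputs.attentions[0][layer]` (the values are made up; the shape is `(batch=1, heads=2, q=3, kv=3)`):

```python
# Toy (1, 2, 3, 3) "tensor" as nested lists: batch -> heads -> query rows -> keys.
attn_step0 = [
    [
        [[0.5, 0.5, 0.0],   # head 0
         [0.2, 0.3, 0.5],
         [0.1, 0.2, 0.7]],
        [[1.0, 0.0, 0.0],   # head 1
         [0.4, 0.6, 0.0],
         [0.3, 0.3, 0.4]],
    ]
]

def last_query_row(step0_attn):
    """Equivalent of step0_attn[..., -1, :]: the prompt's final position,
    whose distribution over the keys produced the first new token."""
    return [[head[-1] for head in batch] for batch in step0_attn]

rows = last_query_row(attn_step0)
print(rows[0][0])  # [0.1, 0.2, 0.7]
```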
**Environment:**
- Transformers version: [4.51.3]
- Model: [Qwen3]
- Code snippet:
```python
outputs = model.generate(
input_ids,
output_attentions=True,
return_dict_in_generate=True
)
# outputs.attentions[0][layer] has shape (1, 16, 1178, 1178)
``` | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40752/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40751 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40751/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40751/comments | https://api.github.com/repos/huggingface/transformers/issues/40751/events | https://github.com/huggingface/transformers/issues/40751 | 3,393,161,964 | I_kwDOCUB6oc7KP4rs | 40,751 | "max_length" in pipeline("text-generation", model="gpt2") does not work | {
"login": "wanghao16020510036",
"id": 46617730,
"node_id": "MDQ6VXNlcjQ2NjE3NzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/46617730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wanghao16020510036",
"html_url": "https://github.com/wanghao16020510036",
"followers_url": "https://api.github.com/users/wanghao16020510036/followers",
"following_url": "https://api.github.com/users/wanghao16020510036/following{/other_user}",
"gists_url": "https://api.github.com/users/wanghao16020510036/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wanghao16020510036/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wanghao16020510036/subscriptions",
"organizations_url": "https://api.github.com/users/wanghao16020510036/orgs",
"repos_url": "https://api.github.com/users/wanghao16020510036/repos",
"events_url": "https://api.github.com/users/wanghao16020510036/events{/privacy}",
"received_events_url": "https://api.github.com/users/wanghao16020510036/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-09-08T08:28:08 | 2025-09-08T11:36:15 | 2025-09-08T11:36:15 | NONE | null | null | null | null | ### System Info
Ubuntu 24.04
VS Code Jupyter extension
### Who can help?
@ArthurZucker
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline, set_seed
generator = pipeline("text-generation", model="gpt2")
set_seed(42)
generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
```
When I run the demo script above, the length of the output sentences exceeds 1000, far greater than `max_length`.
### Expected behavior
The length of the output sentences should not significantly exceed `max_length`. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40751/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40750 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40750/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40750/comments | https://api.github.com/repos/huggingface/transformers/issues/40750/events | https://github.com/huggingface/transformers/issues/40750 | 3,392,620,751 | I_kwDOCUB6oc7KN0jP | 40,750 | Trainer.save_model saves config with FSDP-wrapped architectures instead of base model | {
"login": "JdRion",
"id": 113088158,
"node_id": "U_kgDOBr2Wng",
"avatar_url": "https://avatars.githubusercontent.com/u/113088158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JdRion",
"html_url": "https://github.com/JdRion",
"followers_url": "https://api.github.com/users/JdRion/followers",
"following_url": "https://api.github.com/users/JdRion/following{/other_user}",
"gists_url": "https://api.github.com/users/JdRion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JdRion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JdRion/subscriptions",
"organizations_url": "https://api.github.com/users/JdRion/orgs",
"repos_url": "https://api.github.com/users/JdRion/repos",
"events_url": "https://api.github.com/users/JdRion/events{/privacy}",
"received_events_url": "https://api.github.com/users/JdRion/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-09-08T05:07:45 | 2025-10-16T08:02:48 | 2025-10-16T08:02:48 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.56.0
- Platform: Linux-5.15.92-2.el8.navix.ncc.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.2
- Accelerate version: 1.10.1
- Accelerate config:
```
distributed_type: FSDP
mixed_precision: bf16
fsdp_config:
fsdp_version: 2
fsdp_sharding_strategy: FULL_SHARD
fsdp_activation_checkpointing: True
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: Gemma3DecoderLayer
fsdp_state_dict_type: FULL_STATE_DICT
parallelism_config:
parallelism_config_cp_size: 1
parallelism_config_dp_replicate_size: 1
parallelism_config_dp_shard_size: 96
parallelism_config_tp_size: 1
```
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?:
```
pc = ParallelismConfig()` ,
...
training_args = GRPOConfig(parallelism_config=pc, ...)
accelerate launch \
--num_processes ${NUM_PROCESSES} \
--num_machines ${NUM_MACHINES} \
--machine_rank ${MACHINE_RANK} \
--main_process_ip ${MASTER_ADDR} \
--main_process_port ${MASTER_PORT} \
--config_file fsdp.yaml \
dapo.py --config mg --max-completion-len 8192
```
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
- trainer: @zach-huggingface @SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
When saving a model trained with ParallelismConfig(FSDP), the saved config.json includes an architectures entry with the FSDP-wrapped class name (e.g., "FSDPGemma3ForCausalLM") instead of the original base model ("Gemma3ForCausalLM").
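A post-hoc workaround can be sketched as follows (the helper name is illustrative and not part of transformers; it simply strips the FSDP wrapper prefix from the `architectures` entry in the saved `config.json`):

```python
import json

def fix_architectures(config_path):
    """Strip a leading 'FSDP' wrapper prefix from the architectures entry.

    Illustrative workaround only -- assumes the wrapped class name is the
    base class name with an 'FSDP' prefix, as in the report above.
    """
    with open(config_path) as f:
        cfg = json.load(f)
    cfg["architectures"] = [
        a.removeprefix("FSDP") for a in cfg.get("architectures", [])
    ]
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=2)
```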
Custom Trainer (to avoid OOM on save)
To avoid OOM when saving in FSDP + ND-parallel mode, I overrode save_model in my custom trainer:
```python
class SafeGRPOTrainer(GRPOTrainer):
def save_model(
self, output_dir: Optional[str] = None, _internal_call: bool = False
):
if output_dir is None:
output_dir = self.args.output_dir
out_dir = Path(output_dir)
out_dir.mkdir(parents=True, exist_ok=True)
# N-D parallelism branch
if getattr(self.accelerator, "parallelism_config", None) is not None:
if self.accelerator.should_save_model:
# Added this line to avoid OOM:
state_dict = self.accelerator.get_state_dict(self.model)
self._save(output_dir, state_dict=state_dict)
elif self.is_fsdp_enabled:
if (
"FULL_STATE_DICT"
in str(self.accelerator.state.fsdp_plugin.state_dict_type)
) and (version.parse(accelerate_version) > version.parse("0.24.1")):
state_dict = self.accelerator.get_state_dict(self.model)
if self.args.should_save:
self._save(output_dir, state_dict=state_dict)
elif self.args.should_save:
self._save(output_dir)
if self.args.push_to_hub and not _internal_call:
self.push_to_hub(
commit_message="Model save", revision=self.args.hub_revision
)
```
- Code Example
```python
import torch
from accelerate.utils import ParallelismConfig
from transformers import AutoTokenizer, AutoModelForCausalLM
from trl import GRPOConfig
from util.safe_trainer import SafeGRPOTrainer # <-- custom override with get_state_dict
def main():
# Parallelism config (trigger N-D parallelism branch)
pc = ParallelismConfig()
model_name = "google/medgemma-27b-text-it"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Minimal training args
training_args = GRPOConfig(
parallelism_config=pc,
output_dir="./ckpt_test",
per_device_train_batch_size=1,
num_train_epochs=1,
save_strategy="steps",
save_steps=1,
save_total_limit=1,
logging_steps=1,
report_to="none",
)
# Init trainer
trainer = SafeGRPOTrainer(
model=model,
processing_class=tokenizer,
args=training_args,
train_dataset=train_ds,
reward_funcs=[...], # not needed for repro
)
# Only run save to trigger the bug
trainer.save_model("./ckpt_test")
if __name__ == "__main__":
main()
```
### Expected behavior
- Actual result (config.json)
```json
{
"architectures": [
"FSDPGemma3ForCausalLM"
],
...
}
```
- Expected result (config.json)
```json
{
"architectures": [
"Gemma3ForCausalLM"
],
...
}
``` | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40750/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/40749 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40749/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40749/comments | https://api.github.com/repos/huggingface/transformers/issues/40749/events | https://github.com/huggingface/transformers/pull/40749 | 3,392,249,096 | PR_kwDOCUB6oc6nSEpi | 40,749 | Update JetMoe model card | {
"login": "Tolu-Oye",
"id": 113577566,
"node_id": "U_kgDOBsUOXg",
"avatar_url": "https://avatars.githubusercontent.com/u/113577566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tolu-Oye",
"html_url": "https://github.com/Tolu-Oye",
"followers_url": "https://api.github.com/users/Tolu-Oye/followers",
"following_url": "https://api.github.com/users/Tolu-Oye/following{/other_user}",
"gists_url": "https://api.github.com/users/Tolu-Oye/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tolu-Oye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tolu-Oye/subscriptions",
"organizations_url": "https://api.github.com/users/Tolu-Oye/orgs",
"repos_url": "https://api.github.com/users/Tolu-Oye/repos",
"events_url": "https://api.github.com/users/Tolu-Oye/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tolu-Oye/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-08T01:18:20 | 2025-09-18T17:18:57 | 2025-09-18T17:18:56 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40749",
"html_url": "https://github.com/huggingface/transformers/pull/40749",
"diff_url": "https://github.com/huggingface/transformers/pull/40749.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40749.patch",
"merged_at": null
} | # What does this PR do?
This PR replaces the JetMoe model card with a new model card matching the format introduced in [#36979](https://github.com/huggingface/transformers/issues/36979).
Note: The attention mask image link is currently blank because the image has not been uploaded yet. I will update it once the dataset PR is merged.
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40749/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40748 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40748/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40748/comments | https://api.github.com/repos/huggingface/transformers/issues/40748/events | https://github.com/huggingface/transformers/pull/40748 | 3,392,096,474 | PR_kwDOCUB6oc6nRl3j | 40,748 | Remove unnecessary tildes from documentation | {
"login": "st81",
"id": 58893365,
"node_id": "MDQ6VXNlcjU4ODkzMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/58893365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/st81",
"html_url": "https://github.com/st81",
"followers_url": "https://api.github.com/users/st81/followers",
"following_url": "https://api.github.com/users/st81/following{/other_user}",
"gists_url": "https://api.github.com/users/st81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/st81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/st81/subscriptions",
"organizations_url": "https://api.github.com/users/st81/orgs",
"repos_url": "https://api.github.com/users/st81/repos",
"events_url": "https://api.github.com/users/st81/events{/privacy}",
"received_events_url": "https://api.github.com/users/st81/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-07T23:00:50 | 2025-09-08T15:56:35 | 2025-09-08T15:56:35 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40748",
"html_url": "https://github.com/huggingface/transformers/pull/40748",
"diff_url": "https://github.com/huggingface/transformers/pull/40748.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40748.patch",
"merged_at": "2025-09-08T15:56:35"
} | The [stable documentation (v4.56.1)](https://huggingface.co/docs/transformers/v4.56.1/en/quicktour#pipeline) contains unnecessary tildes in the `infer_device()` method link.
<img width="945" height="554" alt="image" src="https://github.com/user-attachments/assets/6db44575-f965-496d-8b4b-8e7c90a0c3e6" />
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @stevhliu | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40748/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40747 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40747/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40747/comments | https://api.github.com/repos/huggingface/transformers/issues/40747/events | https://github.com/huggingface/transformers/pull/40747 | 3,391,845,836 | PR_kwDOCUB6oc6nQ4MT | 40,747 | Fix: Proper loss aggregation in Trainer with token-aware reduction | {
"login": "moonrunnerkc",
"id": 125813226,
"node_id": "U_kgDOB3_B6g",
"avatar_url": "https://avatars.githubusercontent.com/u/125813226?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moonrunnerkc",
"html_url": "https://github.com/moonrunnerkc",
"followers_url": "https://api.github.com/users/moonrunnerkc/followers",
"following_url": "https://api.github.com/users/moonrunnerkc/following{/other_user}",
"gists_url": "https://api.github.com/users/moonrunnerkc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moonrunnerkc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moonrunnerkc/subscriptions",
"organizations_url": "https://api.github.com/users/moonrunnerkc/orgs",
"repos_url": "https://api.github.com/users/moonrunnerkc/repos",
"events_url": "https://api.github.com/users/moonrunnerkc/events{/privacy}",
"received_events_url": "https://api.github.com/users/moonrunnerkc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-09-07T17:42:44 | 2025-09-08T11:31:10 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40747",
"html_url": "https://github.com/huggingface/transformers/pull/40747",
"diff_url": "https://github.com/huggingface/transformers/pull/40747.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40747.patch",
"merged_at": null
} | This PR fixes incorrect loss normalization in the Trainer when running on multiple GPUs.
The previous implementation always averaged losses, which under-reported values in token-level training.
The new implementation provides a clean, token-aware reduction method that works consistently across single and multi-GPU setups.
Fixes #37474
Motivation and Context:
When using multiple GPUs, Trainer.training_step reported losses that were too small because the reduction was always done by mean().
This PR introduces _reduce_loss, a dedicated helper method that properly handles:
Single GPU: returns loss unchanged
Multi-GPU without token counts: averages across devices
Multi-GPU with token counts: sums and divides by the actual number of tokens
This ensures loss reporting and optimization are accurate, matching expected values like log(vocab_size) during early training.
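As a sanity check, the "expected value like log(vocab_size)" mentioned above is just the cross-entropy of a uniform prediction over the vocabulary (the vocab size below is only an example figure):

```python
import math

# Early in training, a model predicting a roughly uniform distribution over
# the vocabulary has cross-entropy loss ln(vocab_size).
vocab_size = 32000  # example figure, not tied to any specific model
expected_early_loss = math.log(vocab_size)
print(expected_early_loss)  # roughly 10.37
```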
What was changed:
Added _reduce_loss method inside the Trainer class.
Updated training_step to use _reduce_loss instead of hard-coded loss.mean().
Added a new test suite tests/trainer/test_loss_reduction.py covering single/multi-GPU scenarios, token-aware averaging, gradient preservation, and edge cases.
Added a minimal regression test in tests/test_trainer.py.
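The three cases handled by `_reduce_loss` can be sketched as follows (a pure-Python simplification for illustration; the real method operates on tensors and distributed state, and its exact signature differs):

```python
def reduce_loss(per_device_losses, num_tokens=None):
    """Token-aware loss reduction sketch.

    per_device_losses: list with one summed loss value per device.
    num_tokens: total number of tokens across devices, if known.
    """
    if len(per_device_losses) == 1:
        # Single GPU: return the loss unchanged
        return per_device_losses[0]
    if num_tokens is None:
        # Multi-GPU without token counts: average across devices
        return sum(per_device_losses) / len(per_device_losses)
    # Multi-GPU with token counts: sum and divide by the true token count
    return sum(per_device_losses) / num_tokens
```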
Tests:
✅ New tests added (8 total cases) and all pass locally.
✅ All existing tests continue to pass (excluding documented skips for distributed tests).
✅ No regressions introduced.
✅ Code imports and runs without errors.
Notes:
The implementation is backward compatible with existing code.
The design is clean, maintainable, and aligned with existing codebase patterns.
Maintainers may wish to further integrate this with annotations or future loss utilities, but this fix addresses the immediate normalization bug. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40747/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40746 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40746/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40746/comments | https://api.github.com/repos/huggingface/transformers/issues/40746/events | https://github.com/huggingface/transformers/pull/40746 | 3,391,733,297 | PR_kwDOCUB6oc6nQlpT | 40,746 | Set accepts_loss_kwargs to False for ConvNext(|V2)ForImageClassification | {
"login": "clinty",
"id": 223406,
"node_id": "MDQ6VXNlcjIyMzQwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/223406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clinty",
"html_url": "https://github.com/clinty",
"followers_url": "https://api.github.com/users/clinty/followers",
"following_url": "https://api.github.com/users/clinty/following{/other_user}",
"gists_url": "https://api.github.com/users/clinty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clinty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clinty/subscriptions",
"organizations_url": "https://api.github.com/users/clinty/orgs",
"repos_url": "https://api.github.com/users/clinty/repos",
"events_url": "https://api.github.com/users/clinty/events{/privacy}",
"received_events_url": "https://api.github.com/users/clinty/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-07T15:36:54 | 2025-09-08T13:32:49 | 2025-09-08T12:25:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40746",
"html_url": "https://github.com/huggingface/transformers/pull/40746",
"diff_url": "https://github.com/huggingface/transformers/pull/40746.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40746.patch",
"merged_at": "2025-09-08T12:25:43"
The forward methods of these classes accept `kwargs`, which leads `Trainer`'s `compute_loss()` to incorrectly assume that they can handle `num_items_per_batch`, leading to training failure. This sets `accepts_loss_kwargs` to `False` explicitly to mitigate that assumption.
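The heuristic and the opt-out can be illustrated with a hypothetical simplification (the actual `Trainer` logic differs; the function and class names here are assumptions for demonstration only):

```python
import inspect

def model_accepts_loss_kwargs(model):
    """Sketch: does this model's forward appear to take loss kwargs?"""
    # An explicit class-level opt-out wins over signature inspection
    if getattr(model, "accepts_loss_kwargs", None) is False:
        return False
    # Otherwise, a **kwargs parameter on forward is (wrongly, for ConvNext)
    # taken as evidence the model can handle num_items_per_batch
    sig = inspect.signature(model.forward)
    return any(
        p.kind is inspect.Parameter.VAR_KEYWORD
        for p in sig.parameters.values()
    )

class WithKwargs:
    def forward(self, pixel_values, **kwargs): ...

class OptedOut:
    accepts_loss_kwargs = False  # the explicit opt-out this PR adds
    def forward(self, pixel_values, **kwargs): ...
```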
cc: @muellerzr @SunMarc | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40746/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40745 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40745/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40745/comments | https://api.github.com/repos/huggingface/transformers/issues/40745/events | https://github.com/huggingface/transformers/pull/40745 | 3,391,641,756 | PR_kwDOCUB6oc6nQVMV | 40,745 | Fix dotted model names | {
"login": "August-murr",
"id": 145011209,
"node_id": "U_kgDOCKSyCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/145011209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/August-murr",
"html_url": "https://github.com/August-murr",
"followers_url": "https://api.github.com/users/August-murr/followers",
"following_url": "https://api.github.com/users/August-murr/following{/other_user}",
"gists_url": "https://api.github.com/users/August-murr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/August-murr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/August-murr/subscriptions",
"organizations_url": "https://api.github.com/users/August-murr/orgs",
"repos_url": "https://api.github.com/users/August-murr/repos",
"events_url": "https://api.github.com/users/August-murr/events{/privacy}",
"received_events_url": "https://api.github.com/users/August-murr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-07T13:47:09 | 2025-09-10T14:34:56 | 2025-09-10T14:34:56 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40745",
"html_url": "https://github.com/huggingface/transformers/pull/40745",
"diff_url": "https://github.com/huggingface/transformers/pull/40745.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40745.patch",
"merged_at": "2025-09-10T14:34:56"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #40496
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40745/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40744 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40744/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40744/comments | https://api.github.com/repos/huggingface/transformers/issues/40744/events | https://github.com/huggingface/transformers/pull/40744 | 3,391,576,214 | PR_kwDOCUB6oc6nQJb7 | 40,744 | removed gemma3 eager warning | {
"login": "August-murr",
"id": 145011209,
"node_id": "U_kgDOCKSyCQ",
"avatar_url": "https://avatars.githubusercontent.com/u/145011209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/August-murr",
"html_url": "https://github.com/August-murr",
"followers_url": "https://api.github.com/users/August-murr/followers",
"following_url": "https://api.github.com/users/August-murr/following{/other_user}",
"gists_url": "https://api.github.com/users/August-murr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/August-murr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/August-murr/subscriptions",
"organizations_url": "https://api.github.com/users/August-murr/orgs",
"repos_url": "https://api.github.com/users/August-murr/repos",
"events_url": "https://api.github.com/users/August-murr/events{/privacy}",
"received_events_url": "https://api.github.com/users/August-murr/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-07T12:26:33 | 2025-09-08T12:54:49 | 2025-09-08T12:41:52 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40744",
"html_url": "https://github.com/huggingface/transformers/pull/40744",
"diff_url": "https://github.com/huggingface/transformers/pull/40744.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40744.patch",
"merged_at": "2025-09-08T12:41:52"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #40723
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40744/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/40743 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40743/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40743/comments | https://api.github.com/repos/huggingface/transformers/issues/40743/events | https://github.com/huggingface/transformers/issues/40743 | 3,391,311,751 | I_kwDOCUB6oc7KI0-H | 40,743 | Support for 4D attention mask for T5 | {
"login": "Aethor",
"id": 16176966,
"node_id": "MDQ6VXNlcjE2MTc2OTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/16176966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aethor",
"html_url": "https://github.com/Aethor",
"followers_url": "https://api.github.com/users/Aethor/followers",
"following_url": "https://api.github.com/users/Aethor/following{/other_user}",
"gists_url": "https://api.github.com/users/Aethor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aethor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aethor/subscriptions",
"organizations_url": "https://api.github.com/users/Aethor/orgs",
"repos_url": "https://api.github.com/users/Aethor/repos",
"events_url": "https://api.github.com/users/Aethor/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aethor/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-09-07T07:18:05 | 2025-09-09T11:43:33 | null | NONE | null | null | null | null | ### Feature request
Currently, T5 cannot take 4D attention masks of shape (batch_size, num_heads, seq_len, seq_len) as inputs. Passing a 4D `attention_mask` and a 4D `decoder_attention_mask`, as in the snippet below, leads to a shape-related exception:
```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-small")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
input_ids = tokenizer("Where is", return_tensors="pt").input_ids
decoder_input_ids = tokenizer("<pad>", return_tensors="pt").input_ids
batch_size, seq_len = input_ids.shape
tgt_len = decoder_input_ids.shape[1]
num_heads = model.config.num_heads
attention_mask = torch.ones(batch_size, num_heads, seq_len, seq_len)
decoder_attention_mask = torch.ones(batch_size, num_heads, tgt_len, tgt_len).tril(0)
model(
input_ids,
decoder_input_ids=decoder_input_ids,
attention_mask=attention_mask,
decoder_attention_mask=decoder_attention_mask,
)
```
One of the problems in the current code is in the handling of the cross-attention mask. Currently, it is created from the 1D encoder attention mask when supplied. However, in the case of a 4D mask, it seems unclear how to correctly reuse the encoder mask; therefore, the best solution might be to introduce a new 4D mask argument `cross_attention_mask` of shape `(batch_size, num_heads, tgt_len, seq_len)`. This lets the user control all attention masks if necessary.
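To illustrate the shape involved, here is a minimal sketch of what constructing such a mask could look like. Note that `cross_attention_mask` is a *proposed* argument, not part of the current transformers API, and the usage shown at the end is hypothetical:

```python
import torch

# Dimensions matching the snippet above: one encoder sequence of length 3,
# one decoder position, and 8 attention heads (t5-small).
batch_size, num_heads, tgt_len, seq_len = 1, 8, 1, 3

# A 4D cross-attention mask of shape (batch_size, num_heads, tgt_len, seq_len):
# 1.0 = the decoder position may attend to that encoder position, 0.0 = masked.
# Here every decoder position may attend to every encoder position.
cross_attention_mask = torch.ones(batch_size, num_heads, tgt_len, seq_len)

# Hypothetical usage under the proposed API (signature not yet in transformers):
# model(
#     input_ids,
#     decoder_input_ids=decoder_input_ids,
#     attention_mask=attention_mask,
#     decoder_attention_mask=decoder_attention_mask,
#     cross_attention_mask=cross_attention_mask,
# )
```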
### Motivation
4D masks are useful for many purposes, as outlined by #27539 and [this blog post](https://huggingface.co/blog/poedator/4d-masks), but not all models support them.
### Your contribution
I propose to fix the code to handle 4D attention masks, and to add a new `cross_attention_mask` argument to add the possibility to control the cross attention mask manually. I wrote a version of that code in [this fork](https://github.com/Aethor/transformers/tree/t5-4d-attention-mask).
I'm happy to create a PR with my code, but:
1. This is my first transformers contribution, so I need help with some things, such as handling the "Copy" code duplication mechanism of transformers. Should other similar models with functions copied from T5 be changed as well?
2. Although I wrote a [first test with trivial masks](https://github.com/Aethor/transformers/blob/22dc62edbdbc3f2afeb90a31c75047711c1afc5c/tests/models/t5/test_modeling_t5.py#L1876), I am not entirely sure how to test this
3. I want to be sure that adding the new `cross_attention_mask` parameter is the right way to do this and will be approved | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40743/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/40742 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40742/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40742/comments | https://api.github.com/repos/huggingface/transformers/issues/40742/events | https://github.com/huggingface/transformers/pull/40742 | 3,391,262,764 | PR_kwDOCUB6oc6nPSqB | 40,742 | Assume torch in certain files | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-09-07T06:16:08 | 2025-10-17T13:21:21 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40742",
"html_url": "https://github.com/huggingface/transformers/pull/40742",
"diff_url": "https://github.com/huggingface/transformers/pull/40742.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40742.patch",
"merged_at": null
} | # What does this PR do?
Always import torch in some source files that are PT only. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40742/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/40741 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/40741/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/40741/comments | https://api.github.com/repos/huggingface/transformers/issues/40741/events | https://github.com/huggingface/transformers/pull/40741 | 3,391,243,741 | PR_kwDOCUB6oc6nPO8B | 40,741 | Fix np array typing | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-09-07T05:46:04 | 2025-09-08T11:37:20 | 2025-09-08T11:30:41 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/40741",
"html_url": "https://github.com/huggingface/transformers/pull/40741",
"diff_url": "https://github.com/huggingface/transformers/pull/40741.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/40741.patch",
"merged_at": "2025-09-08T11:30:41"
} | # What does this PR do?
Most fixes use `np.ndarray`. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/40741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/40741/timeline | null | null | null | null | true | true |