| column | dtype |
|---|---|
| url | string |
| repository_url | string |
| labels_url | string |
| comments_url | string |
| events_url | string |
| html_url | string |
| id | int64 |
| node_id | string |
| number | int64 |
| title | string |
| user | dict |
| labels | list |
| state | string |
| locked | bool |
| assignee | dict |
| assignees | list |
| milestone | null |
| comments | list |
| created_at | timestamp[ms] |
| updated_at | timestamp[ms] |
| closed_at | timestamp[ms] |
| author_association | string |
| type | dict |
| active_lock_reason | null |
| draft | bool |
| pull_request | dict |
| body | string |
| closed_by | dict |
| reactions | dict |
| timeline_url | string |
| performed_via_github_app | null |
| state_reason | string |
| sub_issues_summary | dict |
| issue_dependencies_summary | dict |
| is_pull_request | bool |
| is_closed | bool |
url: https://api.github.com/repos/huggingface/transformers/issues/41242
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/41242/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/41242/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/41242/events
html_url: https://github.com/huggingface/transformers/pull/41242
id: 3471598613
node_id: PR_kwDOCUB6oc6rc7YO
number: 41242
title: [torchao] Add regex support for ModuleFqnToConfig
user:
```json
{
  "login": "jerryzh168",
  "id": 4958441,
  "node_id": "MDQ6VXNlcjQ5NTg0NDE=",
  "avatar_url": "https://avatars.githubusercontent.com/u/4958441?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/jerryzh168",
  "html_url": "https://github.com/jerryzh168",
  "followers_url": "https://api.github.com/users/jerryzh168/followers",
  "following_url": "https://api.github.com/users/jerryzh168/following{/other_user}",
  "gists_url": "https://api.github.com/users/jerryzh168/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/jerryzh168/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/jerryzh168/subscriptions",
  "organizations_url": "https://api.github.com/users/jerryzh168/orgs",
  "repos_url": "https://api.github.com/users/jerryzh168/repos",
  "events_url": "https://api.github.com/users/jerryzh168/events{/privacy}",
  "received_events_url": "https://api.github.com/users/jerryzh168/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
}
```
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 2025-10-01T01:06:18
updated_at: 2025-10-08T11:05:16
closed_at: 2025-10-08T11:05:16
author_association: CONTRIBUTOR
type: null
active_lock_reason: null
draft: false
pull_request:
```json
{
  "url": "https://api.github.com/repos/huggingface/transformers/pulls/41242",
  "html_url": "https://github.com/huggingface/transformers/pull/41242",
  "diff_url": "https://github.com/huggingface/transformers/pull/41242.diff",
  "patch_url": "https://github.com/huggingface/transformers/pull/41242.patch",
  "merged_at": "2025-10-08T11:05:15"
}
```
body:
Summary:
Similar to https://github.com/pytorch/ao/pull/3084, we added regex support in transformers so people can use regexes to select the modules to quantize.
See https://github.com/pytorch/ao/pull/3084 for docs and the precedence of the different configurations.
Uploaded model: https://huggingface.co/torchao-testing/opt-125m-ModuleFqnToConfig-v1-regex-0.14.0.dev
Test Plan:
pytest tests/quantization/torchao_integration/test_torchao.py -k test_module_fqn_to_config_regex
Reviewers:
Subscribers:
Tasks:
Tags:
closed_by:
```json
{
  "login": "MekkCyber",
  "id": 93391238,
  "node_id": "U_kgDOBZEJhg",
  "avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/MekkCyber",
  "html_url": "https://github.com/MekkCyber",
  "followers_url": "https://api.github.com/users/MekkCyber/followers",
  "following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
  "gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
  "organizations_url": "https://api.github.com/users/MekkCyber/orgs",
  "repos_url": "https://api.github.com/users/MekkCyber/repos",
  "events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
  "received_events_url": "https://api.github.com/users/MekkCyber/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
}
```
reactions:
```json
{
  "url": "https://api.github.com/repos/huggingface/transformers/issues/41242/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
}
```
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/41242/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
is_closed: true
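The precedence the PR references (an exact module FQN entry wins over a pattern entry) can be sketched in plain Python. The helper below is hypothetical and only illustrates the matching order; it is not the actual torchao/transformers implementation, and the `re:` key prefix is an assumption drawn from the linked pytorch/ao PR.

```python
import re

def resolve_config(fqn, module_fqn_to_config):
    """Pick a quantization config for a module FQN.

    Assumed precedence: an exact FQN key wins over any regex key;
    regex keys carry a "re:" prefix and must match the whole FQN.
    """
    if fqn in module_fqn_to_config:
        return module_fqn_to_config[fqn]
    for key, config in module_fqn_to_config.items():
        if key.startswith("re:") and re.fullmatch(key[3:], fqn):
            return config
    return module_fqn_to_config.get("_default")

# Illustrative mapping; the config values would be torchao config objects.
mapping = {
    "model.layers.0.self_attn.q_proj": "int8",        # exact match, highest precedence
    r"re:model\.layers\.\d+\.mlp\..*": "int4",        # regex over all MLP projections
    "_default": None,
}

print(resolve_config("model.layers.0.self_attn.q_proj", mapping))  # int8
print(resolve_config("model.layers.7.mlp.gate_proj", mapping))     # int4
```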
url: https://api.github.com/repos/huggingface/transformers/issues/41241
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/41241/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/41241/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/41241/events
html_url: https://github.com/huggingface/transformers/pull/41241
id: 3471581981
node_id: PR_kwDOCUB6oc6rc3ur
number: 41241
title: Use math.log2
user:
```json
{
  "login": "cyyever",
  "id": 17618148,
  "node_id": "MDQ6VXNlcjE3NjE4MTQ4",
  "avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/cyyever",
  "html_url": "https://github.com/cyyever",
  "followers_url": "https://api.github.com/users/cyyever/followers",
  "following_url": "https://api.github.com/users/cyyever/following{/other_user}",
  "gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
  "organizations_url": "https://api.github.com/users/cyyever/orgs",
  "repos_url": "https://api.github.com/users/cyyever/repos",
  "events_url": "https://api.github.com/users/cyyever/events{/privacy}",
  "received_events_url": "https://api.github.com/users/cyyever/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
}
```
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 2025-10-01T00:58:58
updated_at: 2025-10-01T09:53:11
closed_at: 2025-10-01T09:52:32
author_association: CONTRIBUTOR
type: null
active_lock_reason: null
draft: false
pull_request:
```json
{
  "url": "https://api.github.com/repos/huggingface/transformers/pulls/41241",
  "html_url": "https://github.com/huggingface/transformers/pull/41241",
  "diff_url": "https://github.com/huggingface/transformers/pull/41241.diff",
  "patch_url": "https://github.com/huggingface/transformers/pull/41241.patch",
  "merged_at": "2025-10-01T09:52:32"
}
```
body:
# What does this PR do?
This PR changes `math.log(XXX, 2)` to `math.log2(XXX)` to make the invocation clearer.
closed_by:
```json
{
  "login": "ydshieh",
  "id": 2521628,
  "node_id": "MDQ6VXNlcjI1MjE2Mjg=",
  "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/ydshieh",
  "html_url": "https://github.com/ydshieh",
  "followers_url": "https://api.github.com/users/ydshieh/followers",
  "following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
  "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
  "organizations_url": "https://api.github.com/users/ydshieh/orgs",
  "repos_url": "https://api.github.com/users/ydshieh/repos",
  "events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
  "received_events_url": "https://api.github.com/users/ydshieh/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
}
```
reactions:
```json
{
  "url": "https://api.github.com/repos/huggingface/transformers/issues/41241/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
}
```
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/41241/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
is_closed: true
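The change described in PR 41241 is a drop-in replacement: `math.log2(x)` computes the same value as `math.log(x, 2)` but states the intent directly, and avoids the two-argument form, which CPython evaluates as a quotient of natural logs. A quick check:

```python
import math

x = 1024.0
# Two-argument form: evaluated as log(x) / log(2)
old = math.log(x, 2)
# Dedicated base-2 logarithm: clearer intent, and exact for powers of two
new = math.log2(x)

print(old, new)
```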
url: https://api.github.com/repos/huggingface/transformers/issues/41240
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/41240/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/41240/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/41240/events
html_url: https://github.com/huggingface/transformers/pull/41240
id: 3471564078
node_id: PR_kwDOCUB6oc6rczyB
number: 41240
title: Use removeprefix and removesuffix
user:
```json
{
  "login": "cyyever",
  "id": 17618148,
  "node_id": "MDQ6VXNlcjE3NjE4MTQ4",
  "avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/cyyever",
  "html_url": "https://github.com/cyyever",
  "followers_url": "https://api.github.com/users/cyyever/followers",
  "following_url": "https://api.github.com/users/cyyever/following{/other_user}",
  "gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
  "organizations_url": "https://api.github.com/users/cyyever/orgs",
  "repos_url": "https://api.github.com/users/cyyever/repos",
  "events_url": "https://api.github.com/users/cyyever/events{/privacy}",
  "received_events_url": "https://api.github.com/users/cyyever/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
}
```
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 2025-10-01T00:49:26
updated_at: 2025-10-01T13:43:58
closed_at: 2025-10-01T13:13:05
author_association: CONTRIBUTOR
type: null
active_lock_reason: null
draft: false
pull_request:
```json
{
  "url": "https://api.github.com/repos/huggingface/transformers/pulls/41240",
  "html_url": "https://github.com/huggingface/transformers/pull/41240",
  "diff_url": "https://github.com/huggingface/transformers/pull/41240.diff",
  "patch_url": "https://github.com/huggingface/transformers/pull/41240.patch",
  "merged_at": "2025-10-01T13:13:05"
}
```
body:
# What does this PR do?
This PR uses `str.removeprefix` and `str.removesuffix` to simplify the code.
closed_by:
```json
{
  "login": "Rocketknight1",
  "id": 12866554,
  "node_id": "MDQ6VXNlcjEyODY2NTU0",
  "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/Rocketknight1",
  "html_url": "https://github.com/Rocketknight1",
  "followers_url": "https://api.github.com/users/Rocketknight1/followers",
  "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
  "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
  "organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
  "repos_url": "https://api.github.com/users/Rocketknight1/repos",
  "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
  "received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
}
```
reactions:
```json
{
  "url": "https://api.github.com/repos/huggingface/transformers/issues/41240/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
}
```
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/41240/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
is_closed: true
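For context on the kind of simplification PR 41240 describes: `str.removeprefix`/`str.removesuffix` (Python 3.9+) replace the manual check-then-slice idiom and leave the string unchanged when the affix is absent. The names below are illustrative, not taken from the PR's diff.

```python
# Old idiom: manual check plus slicing, with length arithmetic to get wrong
name = "model.layers.0.weight"
if name.startswith("model."):
    stripped_old = name[len("model."):]

# New idiom: same result, and a no-op when the prefix/suffix is absent
stripped = name.removeprefix("model.")
trimmed = "checkpoint.bin".removesuffix(".bin")
untouched = "config.json".removesuffix(".bin")

print(stripped)   # layers.0.weight
print(trimmed)    # checkpoint
print(untouched)  # config.json
```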
url: https://api.github.com/repos/huggingface/transformers/issues/41239
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/41239/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/41239/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/41239/events
html_url: https://github.com/huggingface/transformers/pull/41239
id: 3471138656
node_id: PR_kwDOCUB6oc6rbWR6
number: 41239
title: Add num_hidden_layers to t5gemma's top level config
user:
```json
{
  "login": "winnie0617",
  "id": 61926894,
  "node_id": "MDQ6VXNlcjYxOTI2ODk0",
  "avatar_url": "https://avatars.githubusercontent.com/u/61926894?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/winnie0617",
  "html_url": "https://github.com/winnie0617",
  "followers_url": "https://api.github.com/users/winnie0617/followers",
  "following_url": "https://api.github.com/users/winnie0617/following{/other_user}",
  "gists_url": "https://api.github.com/users/winnie0617/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/winnie0617/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/winnie0617/subscriptions",
  "organizations_url": "https://api.github.com/users/winnie0617/orgs",
  "repos_url": "https://api.github.com/users/winnie0617/repos",
  "events_url": "https://api.github.com/users/winnie0617/events{/privacy}",
  "received_events_url": "https://api.github.com/users/winnie0617/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
}
```
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 2025-09-30T21:34:42
updated_at: 2025-10-01T13:29:52
closed_at: null
author_association: NONE
type: null
active_lock_reason: null
draft: false
pull_request:
```json
{
  "url": "https://api.github.com/repos/huggingface/transformers/pulls/41239",
  "html_url": "https://github.com/huggingface/transformers/pull/41239",
  "diff_url": "https://github.com/huggingface/transformers/pull/41239.diff",
  "patch_url": "https://github.com/huggingface/transformers/pull/41239.patch",
  "merged_at": null
}
```
body:
# What does this PR do?
This PR fixes a bug in generation with cache for T5Gemma. Due to the nested structure of its config class, the field `num_hidden_layers`, which is used to initialize a dynamic cache, only exists under `config.decoder`.
This PR adds a one-line change that initializes `config.num_hidden_layers` from `config.decoder.num_hidden_layers`.
Fixes #41073
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante @Rocketknight1
Following up on this issue since I encountered the same error. Happy to provide more information or discuss alternative solutions to help get this merged :)
closed_by: null
reactions:
```json
{
  "url": "https://api.github.com/repos/huggingface/transformers/issues/41239/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
}
```
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/41239/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
is_closed: false
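The one-line fix PR 41239 describes (mirroring the decoder's `num_hidden_layers` on the top-level config so cache initialization can find it) can be sketched with stand-in classes. These classes are hypothetical simplifications; the real `T5GemmaConfig` carries many more fields.

```python
class DecoderConfig:
    """Stand-in for the nested decoder sub-config."""
    def __init__(self, num_hidden_layers):
        self.num_hidden_layers = num_hidden_layers

class T5GemmaLikeConfig:
    """Stand-in for a nested encoder-decoder config.

    Cache initialization reads `config.num_hidden_layers`, so the fix
    is to expose the decoder's value at the top level as well.
    """
    def __init__(self, decoder):
        self.decoder = decoder
        # The one-line fix: promote the nested field to the top level.
        self.num_hidden_layers = decoder.num_hidden_layers

cfg = T5GemmaLikeConfig(DecoderConfig(num_hidden_layers=12))
print(cfg.num_hidden_layers)  # 12
```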
url: https://api.github.com/repos/huggingface/transformers/issues/41238
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/41238/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/41238/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/41238/events
html_url: https://github.com/huggingface/transformers/issues/41238
id: 3470770407
node_id: I_kwDOCUB6oc7O38Dn
number: 41238
title: [Bug] RuntimeError: dtype mismatch in _group_beam_search with bfloat16/fp16 models
user:
```json
{
  "login": "anmorgunov",
  "id": 45741336,
  "node_id": "MDQ6VXNlcjQ1NzQxMzM2",
  "avatar_url": "https://avatars.githubusercontent.com/u/45741336?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/anmorgunov",
  "html_url": "https://github.com/anmorgunov",
  "followers_url": "https://api.github.com/users/anmorgunov/followers",
  "following_url": "https://api.github.com/users/anmorgunov/following{/other_user}",
  "gists_url": "https://api.github.com/users/anmorgunov/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/anmorgunov/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/anmorgunov/subscriptions",
  "organizations_url": "https://api.github.com/users/anmorgunov/orgs",
  "repos_url": "https://api.github.com/users/anmorgunov/repos",
  "events_url": "https://api.github.com/users/anmorgunov/events{/privacy}",
  "received_events_url": "https://api.github.com/users/anmorgunov/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
}
```
labels:
```json
[
  {
    "id": 3817266200,
    "node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
    "url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
    "name": "bug",
    "color": "d73a4a",
    "default": true,
    "description": null
  }
]
```
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 2025-09-30T19:16:42
updated_at: 2025-10-01T14:26:52
closed_at: 2025-10-01T14:26:52
author_association: NONE
type: null
active_lock_reason: null
draft: null
pull_request: null
body:
### System Info
python 3.12, transformers >4.45
### Who can help?
@Cyrilvallez @gante @zucchini-nlp
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
A minimal reproduction repository has been created here: [anmorgunov/huggingface-group-beam-search-bug](https://github.com/anmorgunov/huggingface-group-beam-search-bug)
The issue was introduced in `v4.45.0` and persists. To reproduce, run the `bfloat-gen.py` script from the repository.
1. **This will FAIL**
```bash
uv run --with transformers==4.45.0 --with torch --with accelerate bfloat-gen.py
```
2. **This will WORK**
```bash
uv run --with transformers==4.44.2 --with torch --with accelerate bfloat-gen.py
```
The crash reliably occurs when using group beam search (`num_beam_groups > 1`) with `output_scores=True` on a model loaded in a half-precision format like `bfloat16`.
### Expected behavior
The generation process should complete successfully without raising a `RuntimeError`. The code should handle the internal upcasting and downcasting of dtypes correctly, returning the generated sequences and scores. It shouldn't crash.
## The Root Cause
The underlying issue is a `dtype` mismatch during an in-place tensor assignment.
1. Starting from v4.45.0, the generation code upcasts model logits from `bfloat16` to `float32` before a `log_softmax` operation.
2. This `float32` `dtype` propagates through the score calculation.
3. The code then attempts to assign these `float32` scores back into a `bfloat16` tensor (`processed_score`) without downcasting first.
4. This triggers `RuntimeError: Index put requires the source and destination types match...` because PyTorch's `index_put_` kernel does not perform implicit casting.
### Evolution of code
The bug was unintentionally introduced in v4.45.0 and can be traced through the following changes:
- v4.44.2 (and earlier): no upcasting; the logits were used in their native dtype.
```py
# file: src/transformers/generation/utils.py
# line 3742
next_token_logits = outputs.logits[batch_group_indices, -1, :]
```
- v4.45.0: upcasting is introduced. `.float()` is added to the logit calculation for precision, creating the `float32` tensor that eventually causes the conflict.
```py
# file: src/transformers/generation/utils.py
# line 3762
# .float() is needed to retain precision for later logits manipulations
next_token_logits = outputs.logits[:, -1, :].clone().float()
```
- v4.51.0: the code is refactored from `.float()` to a more explicit `.to()` call, but the intentional upcasting to `float32` remains.
```py
# file: src/transformers/generation/utils.py
# line 3673
next_token_logits = outputs.logits[batch_group_indices, -1, :].to(
    dtype=torch.float32, device=input_ids.device
)
```
closed_by:
```json
{
  "login": "gante",
  "id": 12240844,
  "node_id": "MDQ6VXNlcjEyMjQwODQ0",
  "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/gante",
  "html_url": "https://github.com/gante",
  "followers_url": "https://api.github.com/users/gante/followers",
  "following_url": "https://api.github.com/users/gante/following{/other_user}",
  "gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/gante/subscriptions",
  "organizations_url": "https://api.github.com/users/gante/orgs",
  "repos_url": "https://api.github.com/users/gante/repos",
  "events_url": "https://api.github.com/users/gante/events{/privacy}",
  "received_events_url": "https://api.github.com/users/gante/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
}
```
reactions:
```json
{
  "url": "https://api.github.com/repos/huggingface/transformers/issues/41238/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
}
```
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/41238/timeline
performed_via_github_app: null
state_reason: completed
sub_issues_summary:
```json
{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
}
```
issue_dependencies_summary:
```json
{
  "blocked_by": 0,
  "total_blocked_by": 0,
  "blocking": 0,
  "total_blocking": 0
}
```
is_pull_request: false
is_closed: true
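The root cause described in issue 41238 can be reproduced in isolation: advanced-indexing assignment dispatches to `index_put_`, which refuses to cast the source tensor to the destination dtype. The snippet below is a standalone sketch of the failure and of the obvious remedy (downcasting before assignment); it is not the actual patch that closed the issue.

```python
import torch

# bfloat16 buffer standing in for `processed_score` in group beam search
processed_score = torch.zeros(2, 4, dtype=torch.bfloat16)
# float32 scores, as produced after the upcast introduced in v4.45.0
scores = torch.randn(2, 4, dtype=torch.float32)
idx = torch.tensor([0, 1])

raised = False
try:
    processed_score[idx] = scores  # index_put_ refuses to cast implicitly
except RuntimeError:
    raised = True

# One possible remedy: downcast to the destination dtype before assigning
processed_score[idx] = scores.to(processed_score.dtype)
print(raised, processed_score.dtype)
```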
url: https://api.github.com/repos/huggingface/transformers/issues/41237
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/41237/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/41237/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/41237/events
html_url: https://github.com/huggingface/transformers/pull/41237
id: 3470687097
node_id: PR_kwDOCUB6oc6rZzN8
number: 41237
title: Fix binding of video frames to video placeholder in `InternVL` model
user:
```json
{
  "login": "daskol",
  "id": 9336514,
  "node_id": "MDQ6VXNlcjkzMzY1MTQ=",
  "avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/daskol",
  "html_url": "https://github.com/daskol",
  "followers_url": "https://api.github.com/users/daskol/followers",
  "following_url": "https://api.github.com/users/daskol/following{/other_user}",
  "gists_url": "https://api.github.com/users/daskol/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/daskol/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/daskol/subscriptions",
  "organizations_url": "https://api.github.com/users/daskol/orgs",
  "repos_url": "https://api.github.com/users/daskol/repos",
  "events_url": "https://api.github.com/users/daskol/events{/privacy}",
  "received_events_url": "https://api.github.com/users/daskol/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
}
```
labels:
```json
[
  {
    "id": 6886428489,
    "node_id": "LA_kwDOCUB6oc8AAAABmnaPSQ",
    "url": "https://api.github.com/repos/huggingface/transformers/labels/run-slow",
    "name": "run-slow",
    "color": "E1D519",
    "default": false,
    "description": ""
  }
]
```
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 2025-09-30T18:54:39
updated_at: 2025-10-02T13:19:41
closed_at: 2025-10-02T09:43:35
author_association: CONTRIBUTOR
type: null
active_lock_reason: null
draft: false
pull_request:
```json
{
  "url": "https://api.github.com/repos/huggingface/transformers/pulls/41237",
  "html_url": "https://github.com/huggingface/transformers/pull/41237",
  "diff_url": "https://github.com/huggingface/transformers/pull/41237.diff",
  "patch_url": "https://github.com/huggingface/transformers/pull/41237.patch",
  "merged_at": "2025-10-02T09:43:35"
}
```
body:
This PR fixes broken frame patch indexing in `InternVLProcessor`. If the batch is non-trivial (more than one sample) and contains videos with video placeholders in the text, the processor returns `pixel_values` of the wrong shape. Specifically, it returns `(total_num_frames - 1, ...)` instead of `(total_num_frames, ...)`. A minimal snippet to reproduce follows.
```python
import numpy as np
from transformers import AutoProcessor

proc = AutoProcessor.from_pretrained('OpenGVLab/InternVL3_5-1B-HF')
texts = [
    '<video>\nAre there any cyan objects that enter the scene?\nno',
    '<video>\nAre there any red spheres that enter the scene?\nno',
]
frames = np.ones((4, 448, 448, 3), np.float32)
inputs = proc(text=texts,
              videos=[frames, frames], return_tensors='pt',
              videos_kwargs={'size': (448, 448)})
print(inputs.pixel_values.shape)  # (7, 3, 448, 448)
assert inputs.pixel_values.shape[0] == 2 * frames.shape[0]  # FAIL: 7 != 8
```
The original lines were authored by @yonigozlan. Could you take a look?
FYI @Rocketknight1 @zucchini-nlp
closed_by:
```json
{
  "login": "zucchini-nlp",
  "id": 100715397,
  "node_id": "U_kgDOBgDLhQ",
  "avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/zucchini-nlp",
  "html_url": "https://github.com/zucchini-nlp",
  "followers_url": "https://api.github.com/users/zucchini-nlp/followers",
  "following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
  "gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
  "organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
  "repos_url": "https://api.github.com/users/zucchini-nlp/repos",
  "events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
  "received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
}
```
reactions:
```json
{
  "url": "https://api.github.com/repos/huggingface/transformers/issues/41237/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
}
```
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/41237/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
is_closed: true
url: https://api.github.com/repos/huggingface/transformers/issues/41236
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/41236/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/41236/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/41236/events
html_url: https://github.com/huggingface/transformers/pull/41236
id: 3470611529
node_id: PR_kwDOCUB6oc6rZigb
number: 41236
title: fix TrainerIntegrationDeepSpeed UT failures
user:
```json
{
  "login": "yao-matrix",
  "id": 7245027,
  "node_id": "MDQ6VXNlcjcyNDUwMjc=",
  "avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/yao-matrix",
  "html_url": "https://github.com/yao-matrix",
  "followers_url": "https://api.github.com/users/yao-matrix/followers",
  "following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
  "gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
  "organizations_url": "https://api.github.com/users/yao-matrix/orgs",
  "repos_url": "https://api.github.com/users/yao-matrix/repos",
  "events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
  "received_events_url": "https://api.github.com/users/yao-matrix/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
}
```
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 2025-09-30T18:32:35
updated_at: 2025-10-01T15:30:23
closed_at: 2025-10-01T11:55:01
author_association: CONTRIBUTOR
type: null
active_lock_reason: null
draft: false
pull_request:
```json
{
  "url": "https://api.github.com/repos/huggingface/transformers/pulls/41236",
  "html_url": "https://github.com/huggingface/transformers/pull/41236",
  "diff_url": "https://github.com/huggingface/transformers/pull/41236.diff",
  "patch_url": "https://github.com/huggingface/transformers/pull/41236.patch",
  "merged_at": "2025-10-01T11:55:01"
}
```
body:
Running `TrainerIntegrationDeepSpeed` cases such as `pytest -rA tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_can_resume_training_normal_zero2_bf16_ds_optim_ds_scheduler` fails with:
> if "optimizer" in config:
>     if args.adafactor:
> E AttributeError: 'RegressionTrainingArguments' object has no attribute 'adafactor'
>
> src/transformers/integrations/deepspeed.py:359: AttributeError

This PR fixes it by using `args.optim` instead.
@ydshieh @SunMarc please help review, thanks very much.
closed_by:
```json
{
  "login": "SunMarc",
  "id": 57196510,
  "node_id": "MDQ6VXNlcjU3MTk2NTEw",
  "avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/SunMarc",
  "html_url": "https://github.com/SunMarc",
  "followers_url": "https://api.github.com/users/SunMarc/followers",
  "following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
  "gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
  "organizations_url": "https://api.github.com/users/SunMarc/orgs",
  "repos_url": "https://api.github.com/users/SunMarc/repos",
  "events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
  "received_events_url": "https://api.github.com/users/SunMarc/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
}
```
reactions:
```json
{
  "url": "https://api.github.com/repos/huggingface/transformers/issues/41236/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
}
```
timeline_url: https://api.github.com/repos/huggingface/transformers/issues/41236/timeline
performed_via_github_app: null
state_reason: null
sub_issues_summary: null
issue_dependencies_summary: null
is_pull_request: true
is_closed: true
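The failure mode in PR 41236 is plain attribute access on a field that not every `TrainingArguments`-like object defines. The sketch below uses hypothetical stand-in names to show the shape of the fix: switching from the removed `adafactor` flag to a check on the `optim` string.

```python
# Before: crashes when the args object has no `adafactor` attribute
def is_adafactor_old(args):
    return args.adafactor  # AttributeError on stripped-down args objects

# After: rely on the `optim` field instead (hypothetical sketch of the fix)
def is_adafactor_new(args):
    return getattr(args, "optim", None) == "adafactor"

class RegressionTrainingArguments:
    """Minimal stand-in for the test args class: no `adafactor` field."""
    optim = "adamw_torch"

args = RegressionTrainingArguments()
print(is_adafactor_new(args))  # False
```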
https://api.github.com/repos/huggingface/transformers/issues/41235
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41235/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41235/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41235/events
|
https://github.com/huggingface/transformers/issues/41235
| 3,470,322,961
|
I_kwDOCUB6oc7O2O0R
| 41,235
|
Request: demo code for StatefulDataLoader to checkpoint and restore the training data state, not only the model state
|
{
"login": "ldh127",
"id": 12208944,
"node_id": "MDQ6VXNlcjEyMjA4OTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12208944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ldh127",
"html_url": "https://github.com/ldh127",
"followers_url": "https://api.github.com/users/ldh127/followers",
"following_url": "https://api.github.com/users/ldh127/following{/other_user}",
"gists_url": "https://api.github.com/users/ldh127/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ldh127/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ldh127/subscriptions",
"organizations_url": "https://api.github.com/users/ldh127/orgs",
"repos_url": "https://api.github.com/users/ldh127/repos",
"events_url": "https://api.github.com/users/ldh127/events{/privacy}",
"received_events_url": "https://api.github.com/users/ldh127/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-30T17:07:07
| 2025-10-01T12:41:33
| null |
NONE
| null | null | null | null |
I would like to request demo code for StatefulDataLoader. I want to use checkpoints to recover the training data state, not only the model state. How can I use StatefulDataLoader (or some other code) to achieve this?
To be clear: I want to recover the data state, not only the model state.
How can I use accelerate + the transformers Trainer so that, when training is interrupted, it can resume from both the data checkpoint and the model checkpoint? Thanks.
I hope my request is clear.
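A self-contained sketch of the resume pattern the question asks about: save the loader state alongside the model checkpoint, then restore it so iteration resumes mid-epoch without repeating or skipping batches. The `ToyStatefulLoader` below only imitates the `state_dict()`/`load_state_dict()` interface for illustration; with the `torchdata` package installed you would use `torchdata.stateful_dataloader.StatefulDataLoader` instead.

```python
class ToyStatefulLoader:
    """Minimal stand-in for a stateful dataloader, for illustration only."""

    def __init__(self, data):
        self.data = list(data)
        self._pos = 0  # index of the next item to yield

    def __iter__(self):
        while self._pos < len(self.data):
            item = self.data[self._pos]
            self._pos += 1
            yield item

    def state_dict(self):
        # Persist where iteration stopped; save this next to the model weights.
        return {"pos": self._pos}

    def load_state_dict(self, state):
        self._pos = state["pos"]

loader = ToyStatefulLoader(range(5))
seen = []
for batch in loader:
    seen.append(batch)
    if batch == 2:                       # simulate a crash after batch 2
        checkpoint = loader.state_dict()
        break

resumed = ToyStatefulLoader(range(5))    # fresh process, same dataset
resumed.load_state_dict(checkpoint)
seen += list(resumed)
print(seen)  # [0, 1, 2, 3, 4] -- no batch repeated or skipped
```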
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41235/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41234
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41234/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41234/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41234/events
|
https://github.com/huggingface/transformers/pull/41234
| 3,470,260,384
|
PR_kwDOCUB6oc6rYUwv
| 41,234
|
[v5] Bump accelerate to 1.1.0
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T16:44:01
| 2025-10-07T15:18:35
| 2025-10-07T15:18:33
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41234",
"html_url": "https://github.com/huggingface/transformers/pull/41234",
"diff_url": "https://github.com/huggingface/transformers/pull/41234.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41234.patch",
"merged_at": "2025-10-07T15:18:33"
}
|
# What does this PR do?
This PR bumps the minimum version of accelerate to 1.1.0. This should be safe, as trl has already bumped it to 1.4.0.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41234/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41233
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41233/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41233/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41233/events
|
https://github.com/huggingface/transformers/pull/41233
| 3,470,089,987
|
PR_kwDOCUB6oc6rXvgf
| 41,233
|
Remove all instances of `is_safetensors_available`
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T15:47:00
| 2025-10-01T13:57:29
| 2025-10-01T13:57:29
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41233",
"html_url": "https://github.com/huggingface/transformers/pull/41233",
"diff_url": "https://github.com/huggingface/transformers/pull/41233.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41233.patch",
"merged_at": "2025-10-01T13:57:29"
}
|
# What does this PR do?
This PR removes all checks for safetensors availability, as safetensors became a core dependency of transformers two years ago.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41233/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41233/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41232
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41232/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41232/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41232/events
|
https://github.com/huggingface/transformers/pull/41232
| 3,470,035,471
|
PR_kwDOCUB6oc6rXjmd
| 41,232
|
[kernels] Kernel Config
|
{
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T15:31:33
| 2025-10-07T11:58:22
| 2025-10-07T11:58:20
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41232",
"html_url": "https://github.com/huggingface/transformers/pull/41232",
"diff_url": "https://github.com/huggingface/transformers/pull/41232.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41232.patch",
"merged_at": "2025-10-07T11:58:20"
}
|
# What does this PR do?
This PR adds a kernel config to `from_pretrained` to let users plug in custom kernels. For now this requires that the layer is already registered for use with kernels, like `RMSNorm`. Follow-up PRs will make it possible to register kernels without touching the modeling code.
For `RMSNorm` it looks like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, KernelConfig
import torch
model_id = "meta-llama/Llama-3.2-1B"
kernel_mapping = {
"RMSNorm": "kernels-community/layer_norm:LlamaRMSNorm"
}
kernel_config = KernelConfig(kernel_mapping)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16,
use_kernels=True,
kernel_config=kernel_config
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompts = ["The capital of France is"]
token_dict = tokenizer(prompts, return_tensors="pt").to(model.device)
outputs = model.generate(**token_dict, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
{
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41232/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41231
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41231/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41231/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41231/events
|
https://github.com/huggingface/transformers/pull/41231
| 3,469,959,142
|
PR_kwDOCUB6oc6rXS-R
| 41,231
|
[repo utils] Update `models_to_deprecate.py`
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T15:10:40
| 2025-10-01T13:13:40
| 2025-10-01T12:01:52
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41231",
"html_url": "https://github.com/huggingface/transformers/pull/41231",
"diff_url": "https://github.com/huggingface/transformers/pull/41231.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41231.patch",
"merged_at": "2025-10-01T12:01:52"
}
|
# What does this PR do?
Updates the script:
- Fixes hub usage bugs;
- Stops counting downloads after hitting the thresholds, to spare the Hub from requests;
- Better prints, so it doesn't seem like the script has hung;
- Adds support for aliases and a folder<>model_tag mapping.
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41231/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41230
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41230/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41230/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41230/events
|
https://github.com/huggingface/transformers/issues/41230
| 3,469,897,793
|
I_kwDOCUB6oc7O0nBB
| 41,230
|
Consider forking and maintaining pyctcdecode or switch to torchaudio.models.decoder
|
{
"login": "FredHaa",
"id": 6875946,
"node_id": "MDQ6VXNlcjY4NzU5NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FredHaa",
"html_url": "https://github.com/FredHaa",
"followers_url": "https://api.github.com/users/FredHaa/followers",
"following_url": "https://api.github.com/users/FredHaa/following{/other_user}",
"gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions",
"organizations_url": "https://api.github.com/users/FredHaa/orgs",
"repos_url": "https://api.github.com/users/FredHaa/repos",
"events_url": "https://api.github.com/users/FredHaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/FredHaa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-30T14:54:52
| 2025-10-02T13:03:23
| null |
NONE
| null | null | null | null |
### System Info
transformers[torch-speech]==4.56.2
pyannote-audio==4.0.0
### Who can help?
@Rocketknight1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
With the release of the new pyannote-audio==4.0.0, a few problems arise due to the pyctcdecode dependency, which seems to be abandoned.
pyannote-audio==4.0.0 depends on numpy>=2.0.0, but the latest pyctcdecode==0.5.0 (from January 2023) depends on numpy<2.0.0. A PR for numpy 2.0 support has been ignored since February (kensho-technologies/pyctcdecode/pull/116). The numpy<2.0.0 restriction is arbitrary and was set long before numpy 2.0.0 was announced.
Another problem is that one of the last commits to pyctcdecode changed the output format of the decoder from a tuple to a dataclass, making it incompatible with the current transformers ASR pipeline code.
In other words, currently the only way to use the new pyannote-audio==4.0.0 speaker diarization library with a Wav2Vec2ProcessorWithLM is to fork pyctcdecode, revert the main branch to the 0.5.0 state, and then remove the numpy<2.0.0 restriction.
```
user@host ~> uv pip install "transformers[torch-speech]==4.56.2" pyannote-audio==4.0.0 1
× No solution found when resolving dependencies:
╰─▶ Because transformers[torch-speech]==4.56.2 depends on pyctcdecode>=0.4.0 and pyctcdecode>=0.4.0 depends on numpy>=1.15.0,<2.0.0, we can conclude that transformers[torch-speech]==4.56.2 depends on numpy>=1.15.0,<2.0.0. (1)
Because pyannote-core==6.0.1 depends on numpy>=2.0 and only pyannote-core<=6.0.1 is available, we can conclude that pyannote-core>=6.0.1 depends on numpy>=2.0.
And because pyannote-audio==4.0.0 depends on pyannote-core>=6.0.1, we can conclude that pyannote-audio==4.0.0 depends on numpy>=2.0.
And because we know from (1) that transformers[torch-speech]==4.56.2 depends on numpy>=1.15.0,<2.0.0, we can conclude that pyannote-audio==4.0.0 and transformers[torch-speech]==4.56.2 are incompatible.
And because you require transformers[torch-speech]==4.56.2 and pyannote-audio==4.0.0, we can conclude that your requirements are unsatisfiable.
```
### Reproduction
uv venv
uv pip install "transformers[torch-speech]==4.56.2" pyannote-audio==4.0.0
### Expected behavior
The packages should install
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41230/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41229
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41229/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41229/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41229/events
|
https://github.com/huggingface/transformers/pull/41229
| 3,469,861,333
|
PR_kwDOCUB6oc6rW9bt
| 41,229
|
Fix multi-video timestamp bug in Qwen-3-VL and GLM4V
|
{
"login": "tim120526",
"id": 43242086,
"node_id": "MDQ6VXNlcjQzMjQyMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/43242086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tim120526",
"html_url": "https://github.com/tim120526",
"followers_url": "https://api.github.com/users/tim120526/followers",
"following_url": "https://api.github.com/users/tim120526/following{/other_user}",
"gists_url": "https://api.github.com/users/tim120526/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tim120526/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tim120526/subscriptions",
"organizations_url": "https://api.github.com/users/tim120526/orgs",
"repos_url": "https://api.github.com/users/tim120526/repos",
"events_url": "https://api.github.com/users/tim120526/events{/privacy}",
"received_events_url": "https://api.github.com/users/tim120526/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T14:46:18
| 2025-10-02T09:15:57
| 2025-10-02T09:15:57
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41229",
"html_url": "https://github.com/huggingface/transformers/pull/41229",
"diff_url": "https://github.com/huggingface/transformers/pull/41229.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41229.patch",
"merged_at": "2025-10-02T09:15:57"
}
|
# What does this PR do?
This PR fixes a timestamp placeholder error that occurs when multiple videos are included in a single text input.
In `Qwen3-VLProcessor`, frames are sampled from videos, and the relative timestamp placeholder `<{curr_time:.1f} seconds>` is constructed for each frame to indicate its corresponding time information. Previously, the code used the index `i` to retrieve the video metadata (`metadata = video_metadata[i]`), which only worked correctly for a single video input. When multiple videos were provided, the metadata of the first video was incorrectly used to generate timestamp placeholders for all the other videos, resulting in errors.
This PR replaces the index `i` with the correct video index `index` when retrieving metadata, ensuring that each video uses the appropriate timestamp placeholders.
Hi [@zach-huggingface](https://github.com/zach-huggingface) [@SunMarc](https://github.com/SunMarc) [@zucchini-nlp](https://github.com/zucchini-nlp), I am just following up on this bug and the proposed fix. It is a one-line change, and I would be happy to assist with any questions or further clarifications to help get it merged.
Thank you very much!
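A toy reproduction of the bug pattern described above (the fps values and helper name are hypothetical, not taken from the processor code): when placeholders for several videos are built in one pass, the metadata lookup must use the per-video index rather than reusing the first video's metadata.

```python
video_metadata = [{"fps": 2.0}, {"fps": 0.5}]  # two videos with different frame rates

def timestamps(metadata, num_frames):
    # Build the `<{curr_time:.1f} seconds>` placeholder for each sampled frame.
    return [f"<{frame / metadata['fps']:.1f} seconds>" for frame in range(num_frames)]

# Buggy: every video is rendered with the first video's metadata.
buggy = [timestamps(video_metadata[0], 2) for _ in video_metadata]
# Fixed: each video is rendered with its own metadata, via the correct index.
fixed = [timestamps(video_metadata[index], 2) for index in range(len(video_metadata))]

print(buggy[1])  # ['<0.0 seconds>', '<0.5 seconds>']  <- wrong fps for video 1
print(fixed[1])  # ['<0.0 seconds>', '<2.0 seconds>']
```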
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41229/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41228
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41228/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41228/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41228/events
|
https://github.com/huggingface/transformers/pull/41228
| 3,469,503,311
|
PR_kwDOCUB6oc6rVwVk
| 41,228
|
Fix sliding window attn mask
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null |
[] | 2025-09-30T13:33:35
| 2025-09-30T15:22:53
| 2025-09-30T15:22:53
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41228",
"html_url": "https://github.com/huggingface/transformers/pull/41228",
"diff_url": "https://github.com/huggingface/transformers/pull/41228.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41228.patch",
"merged_at": "2025-09-30T15:22:53"
}
|
It was reported in #41184 that the sliding-window attention mask has errors in continuous batching (CB). This PR fixes that and adds a test for the `build_attention_mask` function, so it does not happen again.
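For reference, a minimal pure-Python sketch of the sliding-window causal semantics such a mask should encode (shapes and the function name are assumptions, not the PR's actual implementation): query position `q` may attend to key position `k` iff `k <= q` and `q - k < window`.

```python
def sliding_window_mask(seq_len, window):
    # True means "may attend": causal, and at most `window` positions back.
    return [[(k <= q) and (q - k < window) for k in range(seq_len)]
            for q in range(seq_len)]

for row in sliding_window_mask(4, 2):
    print(["x" if allowed else "." for allowed in row])
# ['x', '.', '.', '.']
# ['x', 'x', '.', '.']
# ['.', 'x', 'x', '.']
# ['.', '.', 'x', 'x']
```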
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41228/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41227
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41227/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41227/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41227/events
|
https://github.com/huggingface/transformers/pull/41227
| 3,469,477,927
|
PR_kwDOCUB6oc6rVrRa
| 41,227
|
Unify is_torchvision_v2_available with is_torchvision_available
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T13:29:10
| 2025-09-30T14:24:36
| 2025-09-30T14:21:49
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41227",
"html_url": "https://github.com/huggingface/transformers/pull/41227",
"diff_url": "https://github.com/huggingface/transformers/pull/41227.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41227.patch",
"merged_at": "2025-09-30T14:21:49"
}
|
# What does this PR do?
The minimum recommended torchvision version for torch 2.2 is 0.17.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41227/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41226
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41226/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41226/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41226/events
|
https://github.com/huggingface/transformers/pull/41226
| 3,469,427,789
|
PR_kwDOCUB6oc6rVhBr
| 41,226
|
Remove old Python code
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T13:19:46
| 2025-09-30T14:16:27
| 2025-09-30T14:16:00
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41226",
"html_url": "https://github.com/huggingface/transformers/pull/41226",
"diff_url": "https://github.com/huggingface/transformers/pull/41226.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41226.patch",
"merged_at": "2025-09-30T14:15:59"
}
|
# What does this PR do?
Remove code for Python < 3.10
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41226/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41225
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41225/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41225/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41225/events
|
https://github.com/huggingface/transformers/pull/41225
| 3,469,424,312
|
PR_kwDOCUB6oc6rVgd3
| 41,225
|
[voxtral] language detection + skipping lang:xx
|
{
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T13:19:16
| 2025-10-10T09:18:31
| 2025-10-10T09:18:31
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41225",
"html_url": "https://github.com/huggingface/transformers/pull/41225",
"diff_url": "https://github.com/huggingface/transformers/pull/41225.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41225.patch",
"merged_at": "2025-10-10T09:18:31"
}
|
# What does this PR do?
Adds the possibility of setting `language=None` in `apply_transcription_request` for automatic language detection.
> [!NOTE]
> This is **not** breaking. Instead of being required, `language` is now optional.
It was a bit hidden, but Voxtral supports language detection by omitting the language token, e.g.
```
- <s>[INST][BEGIN_AUDIO][AUDIO]...[AUDIO][/INST]lang:en[TRANSCRIBE]
+ <s>[INST][BEGIN_AUDIO][AUDIO]...[AUDIO][/INST][TRANSCRIBE]
```
see [here](https://github.com/mistralai/mistral-common/blob/f88454face1d2b4ae062ca53b7b0272cee6d4511/src/mistral_common/tokens/tokenizers/instruct.py#L955-L958):
```python
tokens = [*prefix, *tokens]
if request.language is not None:
language_string = f"lang:{request.language}" # no space.
tokens += self.tokenizer.encode(language_string, bos=False, eos=False)
```
## Other update
>[!IMPORTANT]
> 🚨 In the specific case of Voxtral, the added `f"lang:xx"` (always a two char language code since it follows ISO 639-1 alpha-2 format) is not considered as a special token by `mistral-common` and is encoded/ decoded as normal text.
Nevertheless we should remove it to ease users life.
Added:
- skipping logic in `MistralCommonTokenizer`'s `decode`
- associated `test_decode_transcription_mode`
|
{
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41225/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41224
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41224/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41224/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41224/events
|
https://github.com/huggingface/transformers/pull/41224
| 3,469,324,369
|
PR_kwDOCUB6oc6rVMnE
| 41,224
|
Add DINOv3ViTForImageClassification support
|
{
"login": "dimidagd",
"id": 46669905,
"node_id": "MDQ6VXNlcjQ2NjY5OTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/46669905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dimidagd",
"html_url": "https://github.com/dimidagd",
"followers_url": "https://api.github.com/users/dimidagd/followers",
"following_url": "https://api.github.com/users/dimidagd/following{/other_user}",
"gists_url": "https://api.github.com/users/dimidagd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dimidagd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dimidagd/subscriptions",
"organizations_url": "https://api.github.com/users/dimidagd/orgs",
"repos_url": "https://api.github.com/users/dimidagd/repos",
"events_url": "https://api.github.com/users/dimidagd/events{/privacy}",
"received_events_url": "https://api.github.com/users/dimidagd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-30T13:00:02
| 2025-10-27T14:46:00
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41224",
"html_url": "https://github.com/huggingface/transformers/pull/41224",
"diff_url": "https://github.com/huggingface/transformers/pull/41224.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41224.patch",
"merged_at": null
}
|
@yonigozlan @molbap
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41224/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41223
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41223/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41223/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41223/events
|
https://github.com/huggingface/transformers/pull/41223
| 3,469,317,217
|
PR_kwDOCUB6oc6rVLS7
| 41,223
|
🚨 [v5] remove deprecated `generate` classes (constraints and beam scorers)
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T12:58:42
| 2025-10-02T11:11:14
| 2025-10-02T11:11:11
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41223",
"html_url": "https://github.com/huggingface/transformers/pull/41223",
"diff_url": "https://github.com/huggingface/transformers/pull/41223.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41223.patch",
"merged_at": "2025-10-02T11:11:11"
}
|
# What does this PR do?
The decoding methods using these classes were recently moved to the hub, and these classes were deprecated. They were scheduled to be removed in minor versions that won't exist, so they get tagged as v5 removals :)
(cc @manueldeprada )
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41223/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41223/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41222
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41222/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41222/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41222/events
|
https://github.com/huggingface/transformers/pull/41222
| 3,469,312,199
|
PR_kwDOCUB6oc6rVKbR
| 41,222
|
Fix pylint warnings
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T12:57:48
| 2025-10-02T14:31:59
| 2025-10-01T13:16:22
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41222",
"html_url": "https://github.com/huggingface/transformers/pull/41222",
"diff_url": "https://github.com/huggingface/transformers/pull/41222.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41222.patch",
"merged_at": "2025-10-01T13:16:22"
}
|
# What does this PR do?
More fixes
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41222/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41221
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41221/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41221/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41221/events
|
https://github.com/huggingface/transformers/pull/41221
| 3,469,159,344
|
PR_kwDOCUB6oc6rUs7_
| 41,221
|
update code owners
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T12:25:30
| 2025-09-30T14:21:21
| 2025-09-30T14:21:19
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41221",
"html_url": "https://github.com/huggingface/transformers/pull/41221",
"diff_url": "https://github.com/huggingface/transformers/pull/41221.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41221.patch",
"merged_at": "2025-09-30T14:21:19"
}
|
# What does this PR do?
Have to do this now
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41221/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41220
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41220/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41220/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41220/events
|
https://github.com/huggingface/transformers/pull/41220
| 3,469,138,138
|
PR_kwDOCUB6oc6rUoYF
| 41,220
|
Align pull request template to bug report template
|
{
"login": "tomaarsen",
"id": 37621491,
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaarsen",
"html_url": "https://github.com/tomaarsen",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T12:20:43
| 2025-09-30T12:29:37
| 2025-09-30T12:25:41
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41220",
"html_url": "https://github.com/huggingface/transformers/pull/41220",
"diff_url": "https://github.com/huggingface/transformers/pull/41220.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41220.patch",
"merged_at": "2025-09-30T12:25:41"
}
|
# What does this PR do?
Align pull request template to bug report template: https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml?plain=1
Follow-up of #40881 by @ArthurZucker
The only difference is that I don't refer users to https://discuss.huggingface.co/ for hub issues, as the PR template isn't for issues.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @LysandreJik
- Tom Aarsen
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41220/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41219
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41219/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41219/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41219/events
|
https://github.com/huggingface/transformers/pull/41219
| 3,469,093,044
|
PR_kwDOCUB6oc6rUee4
| 41,219
|
Add missing ModelOutput subclass return type hints
|
{
"login": "tomaarsen",
"id": 37621491,
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaarsen",
"html_url": "https://github.com/tomaarsen",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-30T12:08:35
| 2025-09-30T12:33:26
| null |
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41219",
"html_url": "https://github.com/huggingface/transformers/pull/41219",
"diff_url": "https://github.com/huggingface/transformers/pull/41219.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41219.patch",
"merged_at": null
}
|
# What does this PR do?
This PR adds missing ModelOutput subclass return type hints. I'm running an analysis on the Model classes to see what kind of outputs are used for the various modalities, and I noticed a handful of architectures were missing them.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
- Tom Aarsen
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41219/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41218
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41218/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41218/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41218/events
|
https://github.com/huggingface/transformers/pull/41218
| 3,469,051,411
|
PR_kwDOCUB6oc6rUVaN
| 41,218
|
Video processor accepts single frames on cuda
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T11:57:16
| 2025-10-01T08:55:11
| 2025-10-01T08:55:11
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41218",
"html_url": "https://github.com/huggingface/transformers/pull/41218",
"diff_url": "https://github.com/huggingface/transformers/pull/41218.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41218.patch",
"merged_at": "2025-10-01T08:55:11"
}
|
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/40867 and allows input videos to be a 3D tensor on a CUDA device
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41218/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41217
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41217/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41217/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41217/events
|
https://github.com/huggingface/transformers/pull/41217
| 3,469,016,040
|
PR_kwDOCUB6oc6rUNd4
| 41,217
|
[`FA3`] Fix masking and loading logic in same process
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T11:48:41
| 2025-10-01T14:36:17
| 2025-10-01T14:36:13
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41217",
"html_url": "https://github.com/huggingface/transformers/pull/41217",
"diff_url": "https://github.com/huggingface/transformers/pull/41217.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41217.patch",
"merged_at": "2025-10-01T14:36:13"
}
|
There are two issues addressed:
1. FA3 did not use any masking
- Now registered correctly
- Tested indirectly via a rewritten parity check between FA2 and FA3 that already exists
2. The loading logic for anything FA-related (kernels, FA2, FA3) did not work within the same process: only the first FA variant loaded was used, whereas now the latest requested one is.
- A reload is only forced when an attention type is set to an FA variant.
The small benchmark in the parity test also now shows a 1.1x speedup (A100) instead of roughly 1x (no improvement).
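The stale-loading problem in point 2 can be sketched as follows. All names here are illustrative stand-ins, not the actual transformers internals: a process-wide cache that only checks "is something loaded?" returns the first FA variant forever, while keying the cache on the requested variant forces a reload when the attention type changes.

```python
# Illustrative sketch of the stale-cache bug and its fix (hypothetical names).
_loaded_fa = {"name": None, "impl": None}

def load_fa(variant):
    # Buggy behaviour (commented out): first loaded variant wins for the
    # whole process, regardless of what is requested later.
    # if _loaded_fa["impl"] is not None:
    #     return _loaded_fa["impl"]
    # Fixed behaviour: reload whenever a different FA variant is requested.
    if _loaded_fa["name"] != variant:
        _loaded_fa["name"] = variant
        _loaded_fa["impl"] = f"{variant}-kernel"  # stands in for the real import
    return _loaded_fa["impl"]

print(load_fa("flash_attention_2"))  # -> flash_attention_2-kernel
print(load_fa("flash_attention_3"))  # -> flash_attention_3-kernel, not stale
```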
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41217/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41216
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41216/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41216/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41216/events
|
https://github.com/huggingface/transformers/pull/41216
| 3,468,860,628
|
PR_kwDOCUB6oc6rTrES
| 41,216
|
[generate] cache missing custom generate file
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T11:10:32
| 2025-09-30T13:59:42
| 2025-09-30T13:32:24
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41216",
"html_url": "https://github.com/huggingface/transformers/pull/41216",
"diff_url": "https://github.com/huggingface/transformers/pull/41216.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41216.patch",
"merged_at": "2025-09-30T13:32:24"
}
|
# What does this PR do?
Reorders code in `load_custom_generate` such that we load the `.py` file first, and check whether it is to be used later. This way, we can cache the result of the Hub request even when the file doesn't exist.
⚠️ Since we try to load `custom_generate` at model-loading time, this results in far fewer Hub requests in situations like CIs. Some downstream CIs were hitting rate limits, in part due to this feature's uncached requests prior to this change.
______________________
✅ ran `custom_generation` tests, including existing slow tests.
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41216/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41215
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41215/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41215/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41215/events
|
https://github.com/huggingface/transformers/pull/41215
| 3,468,858,562
|
PR_kwDOCUB6oc6rTql4
| 41,215
|
Fix CLIP memory leak causing 600-800MB accumulation per batch
|
{
"login": "eun2ce",
"id": 40400092,
"node_id": "MDQ6VXNlcjQwNDAwMDky",
"avatar_url": "https://avatars.githubusercontent.com/u/40400092?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eun2ce",
"html_url": "https://github.com/eun2ce",
"followers_url": "https://api.github.com/users/eun2ce/followers",
"following_url": "https://api.github.com/users/eun2ce/following{/other_user}",
"gists_url": "https://api.github.com/users/eun2ce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eun2ce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eun2ce/subscriptions",
"organizations_url": "https://api.github.com/users/eun2ce/orgs",
"repos_url": "https://api.github.com/users/eun2ce/repos",
"events_url": "https://api.github.com/users/eun2ce/events{/privacy}",
"received_events_url": "https://api.github.com/users/eun2ce/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-30T11:10:08
| 2025-10-01T13:27:15
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41215",
"html_url": "https://github.com/huggingface/transformers/pull/41215",
"diff_url": "https://github.com/huggingface/transformers/pull/41215.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41215.patch",
"merged_at": null
}
|
# What does this PR do?
This PR fixes a critical memory leak in CLIP model's `get_image_features` and `get_text_features` methods that was causing significant memory accumulation during repeated inference calls.
## Problem
When using CLIP models for batch processing (e.g., in microservices), repeated calls to `get_image_features` or `get_text_features` caused memory to increase by 600-800MB per batch, eventually leading to OOM crashes. This was particularly problematic in production environments processing multiple batches sequentially.
## Root Cause
The issue was caused by holding references to full `BaseModelOutputWithPooling` objects when only the `pooler_output` tensor was needed. These large intermediate objects contained all hidden states, attentions, and other tensors that were not being properly garbage collected.
## Solution
- Explicitly delete intermediate `vision_outputs` and `text_outputs` objects after extracting the required `pooler_output`
- This allows Python's garbage collector to immediately free the memory used by unused intermediate tensors
- No API changes - existing code continues to work exactly the same
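The pattern behind the fix can be sketched with stand-in objects (`FullOutput` below is illustrative, not the real `BaseModelOutputWithPooling`): extract the small tensor you need, then drop the reference to the large container so its other fields become collectible immediately.

```python
import gc

class FullOutput:
    """Stand-in for BaseModelOutputWithPooling: holds large intermediates."""
    def __init__(self):
        self.hidden_states = bytearray(10_000_000)  # large data we don't need
        self.pooler_output = [0.0] * 8              # the only part we keep

def get_image_features():
    vision_outputs = FullOutput()
    pooled = vision_outputs.pooler_output  # keep only the pooled tensor
    del vision_outputs                     # drop the large container eagerly
    return pooled

features = get_image_features()
gc.collect()  # the unreferenced hidden_states can now be reclaimed
print(len(features))  # -> 8
```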
## Testing
- Created test script that verifies tensor count remains constant across multiple batches
- Confirmed fix resolves the memory accumulation issue
- No performance impact on inference speed
Fixes #41178
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@amyeroberts @qubvel (vision models)
---
# [UPDATE] Further Memory Analysis Results
After deeper profiling investigation, we discovered this is actually a system-level memory fragmentation issue, matching known PyTorch/glibc behavior (refs: pytorch/pytorch#68114, pytorch/pytorch#114455).
## Detailed Memory Profiling
- Model (get_image_features): 94% of leak (374.2 MiB)
- Processor (preprocess): 6% of leak (22.7 MiB)
- Total: ~396.9 MiB per batch
## Recommended Application-Level Solution
Users experiencing significant memory issues should handle memory management at the application level:
```python
import ctypes
import gc

def trim_memory():
    """Ask glibc to return freed memory to the OS (Linux/glibc only)."""
    try:
        libc = ctypes.CDLL("libc.so.6")
        return libc.malloc_trim(0)
    except (OSError, AttributeError):
        return False

# Use after batch processing
features = model.get_image_features(**inputs)
del inputs, features
gc.collect()
trim_memory()
```
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41215/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41214
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41214/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41214/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41214/events
|
https://github.com/huggingface/transformers/pull/41214
| 3,468,737,197
|
PR_kwDOCUB6oc6rTPaM
| 41,214
|
[V5] Remove deprecated transformers.onnx
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 9105758243,
"node_id": "LA_kwDOCUB6oc8AAAACHr7YIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for_v5?",
"name": "for_v5?",
"color": "35BC94",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T10:39:33
| 2025-10-01T12:25:09
| 2025-10-01T12:17:04
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41214",
"html_url": "https://github.com/huggingface/transformers/pull/41214",
"diff_url": "https://github.com/huggingface/transformers/pull/41214.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41214.patch",
"merged_at": "2025-10-01T12:17:04"
}
|
# What does this PR do?
Remove the source files, doc and the dependencies.
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41214/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41213
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41213/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41213/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41213/events
|
https://github.com/huggingface/transformers/issues/41213
| 3,468,604,282
|
I_kwDOCUB6oc7OvrN6
| 41,213
|
Qwen3_Omni error inference transformer 4.57.0.dev
|
{
"login": "Tortoise17",
"id": 36593708,
"node_id": "MDQ6VXNlcjM2NTkzNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/36593708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tortoise17",
"html_url": "https://github.com/Tortoise17",
"followers_url": "https://api.github.com/users/Tortoise17/followers",
"following_url": "https://api.github.com/users/Tortoise17/following{/other_user}",
"gists_url": "https://api.github.com/users/Tortoise17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tortoise17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tortoise17/subscriptions",
"organizations_url": "https://api.github.com/users/Tortoise17/orgs",
"repos_url": "https://api.github.com/users/Tortoise17/repos",
"events_url": "https://api.github.com/users/Tortoise17/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tortoise17/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T10:06:40
| 2025-10-01T13:38:59
| 2025-10-01T08:28:11
|
NONE
| null | null | null | null |
### System Info
There is an inference demo from Qwen Omni at
`https://github.com/QwenLM/Qwen3-Omni/blob/main/web_demo.py`
and below is the error.
The Transformers version is 4.57.0.dev, directly cloned and installed,
and the model used is
`https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct`
and the resulting error is below.
```
model = LLM(
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 282, in __init__
[rank0]: self.llm_engine = LLMEngine.from_engine_args(
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 493, in from_engine_args
[rank0]: return engine_cls.from_vllm_config(
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 469, in from_vllm_config
[rank0]: return cls(
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 260, in __init__
[rank0]: self.model_executor = executor_class(vllm_config=vllm_config)
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 264, in __init__
[rank0]: super().__init__(*args, **kwargs)
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 54, in __init__
[rank0]: self._init_executor()
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 126, in _init_executor
[rank0]: self._run_workers("load_model",
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/executor/mp_distributed_executor.py", line 186, in _run_workers
[rank0]: driver_worker_output = run_method(self.driver_worker, sent_method,
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/utils/__init__.py", line 3060, in run_method
[rank0]: return func(*args, **kwargs)
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/worker/worker.py", line 211, in load_model
[rank0]: self.model_runner.load_model()
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1054, in load_model
[rank0]: self.model = get_model(vllm_config=self.vllm_config)
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py", line 119, in get_model
[rank0]: return loader.load_model(vllm_config=vllm_config,
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/model_executor/model_loader/base_loader.py", line 45, in load_model
[rank0]: model = initialize_model(vllm_config=vllm_config,
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/model_executor/model_loader/utils.py", line 64, in initialize_model
[rank0]: return model_class(vllm_config=vllm_config, prefix=prefix)
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/compilation/decorators.py", line 199, in __init__
[rank0]: old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/model_executor/models/transformers.py", line 792, in __init__
[rank0]: super().__init__(vllm_config=vllm_config, prefix=prefix)
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/compilation/decorators.py", line 199, in __init__
[rank0]: old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/model_executor/models/transformers.py", line 707, in __init__
[rank0]: super().__init__(vllm_config=vllm_config, prefix=prefix)
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/model_executor/models/transformers.py", line 462, in __init__
[rank0]: self.model: PreTrainedModel = AutoModel.from_config(
[rank0]: File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 243, in from_config
[rank0]: raise ValueError(
[rank0]: ValueError: Unrecognized configuration class <class 'transformers.models.qwen3_omni_moe.configuration_qwen3_omni_moe.Qwen3OmniMoeConfig'> for this kind of AutoModel: AutoModel.
[rank0]: Model type should be one of Aimv2Config, Aimv2VisionConfig, AlbertConfig, AlignConfig, AltCLIPConfig, ApertusConfig, ArceeConfig, AriaConfig, AriaTextConfig, ASTConfig, AutoformerConfig, AyaVisionConfig, BambaConfig, BarkConfig, BartConfig, BeitConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BitConfig, BitNetConfig, BlenderbotConfig, BlenderbotSmallConfig, BlipConfig, Blip2Config, Blip2QFormerConfig, BloomConfig, BltConfig, BridgeTowerConfig, BrosConfig, CamembertConfig, CanineConfig, ChameleonConfig, ChineseCLIPConfig, ChineseCLIPVisionConfig, ClapConfig, CLIPConfig, CLIPTextConfig, CLIPVisionConfig, CLIPSegConfig, ClvpConfig, LlamaConfig, CodeGenConfig, CohereConfig, Cohere2Config, Cohere2VisionConfig, ConditionalDetrConfig, ConvBertConfig, ConvNextConfig, ConvNextV2Config, CpmAntConfig, CsmConfig, CTRLConfig, CvtConfig, DFineConfig, DabDetrConfig, DacConfig, Data2VecAudioConfig, Data2VecTextConfig, Data2VecVisionConfig, DbrxConfig, DebertaConfig, DebertaV2Config, DecisionTransformerConfig, DeepseekV2Config, DeepseekV3Config, DeepseekVLConfig, DeepseekVLHybridConfig, DeformableDetrConfig, DeiTConfig, DepthProConfig, DetaConfig, DetrConfig, DiaConfig, DiffLlamaConfig, DinatConfig, Dinov2Config, Dinov2WithRegistersConfig, DINOv3ConvNextConfig, DINOv3ViTConfig, DistilBertConfig, DogeConfig, DonutSwinConfig, Dots1Config, DPRConfig, DPTConfig, EfficientFormerConfig, EfficientLoFTRConfig, EfficientNetConfig, ElectraConfig, Emu3Config, EncodecConfig, ErnieConfig, Ernie4_5Config, Ernie4_5_MoeConfig, ErnieMConfig, EsmConfig, EvollaConfig, Exaone4Config, FalconConfig, FalconH1Config, FalconMambaConfig, FastSpeech2ConformerConfig, FastSpeech2ConformerWithHifiGanConfig, FlaubertConfig, FlavaConfig, FlexOlmoConfig, Florence2Config, FNetConfig, FocalNetConfig, FSMTConfig, FunnelConfig, FuyuConfig, GemmaConfig, Gemma2Config, Gemma3Config, Gemma3TextConfig, Gemma3nConfig, Gemma3nAudioConfig, Gemma3nTextConfig, 
Gemma3nVisionConfig, GitConfig, GlmConfig, Glm4Config, Glm4MoeConfig, Glm4vConfig, Glm4vMoeConfig, Glm4vMoeTextConfig, Glm4vTextConfig, GLPNConfig, GotOcr2Config, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GptOssConfig, GPTJConfig, GPTSanJapaneseConfig, GraniteConfig, GraniteMoeConfig, GraniteMoeHybridConfig, GraniteMoeSharedConfig, GraphormerConfig, GroundingDinoConfig, GroupViTConfig, HeliumConfig, HGNetV2Config, HieraConfig, HubertConfig, HunYuanDenseV1Config, HunYuanMoEV1Config, IBertConfig, IdeficsConfig, Idefics2Config, Idefics3Config, Idefics3VisionConfig, IJepaConfig, ImageGPTConfig, InformerConfig, InstructBlipConfig, InstructBlipVideoConfig, InternVLConfig, InternVLVisionConfig, JambaConfig, JanusConfig, JetMoeConfig, JukeboxConfig, Kosmos2Config, Kosmos2_5Config, KyutaiSpeechToTextConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LevitConfig, Lfm2Config, Lfm2VlConfig, LightGlueConfig, LiltConfig, LlamaConfig, Llama4Config, Llama4TextConfig, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, LongcatFlashConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MambaConfig, Mamba2Config, MarianConfig, MarkupLMConfig, Mask2FormerConfig, MaskFormerConfig, MaskFormerSwinConfig, MBartConfig, MCTCTConfig, MegaConfig, MegatronBertConfig, MetaClip2Config, MgpstrConfig, MimiConfig, MiniMaxConfig, MinistralConfig, MistralConfig, Mistral3Config, MixtralConfig, MLCDVisionConfig, MllamaConfig, MMGroundingDinoConfig, MobileBertConfig, MobileNetV1Config, MobileNetV2Config, MobileViTConfig, MobileViTV2Config, ModernBertConfig, ModernBertDecoderConfig, MoonshineConfig, MoshiConfig, MPNetConfig, MptConfig, MraConfig, MT5Config, MusicgenConfig, MusicgenMelodyConfig, MvpConfig, NatConfig, NemotronConfig, NezhaConfig, NllbMoeConfig, NystromformerConfig, OlmoConfig, Olmo2Config, Olmo3Config, OlmoeConfig, OmDetTurboConfig, OneFormerConfig, OpenLlamaConfig, 
OpenAIGPTConfig, OPTConfig, Ovis2Config, Owlv2Config, OwlViTConfig, PaliGemmaConfig, ParakeetCTCConfig, ParakeetEncoderConfig, PatchTSMixerConfig, PatchTSTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, TimmWrapperConfig, PerceptionLMConfig, PersimmonConfig, PhiConfig, Phi3Config, Phi4MultimodalConfig, PhimoeConfig, PixtralVisionConfig, PLBartConfig, PoolFormerConfig, ProphetNetConfig, PvtConfig, PvtV2Config, QDQBertConfig, Qwen2Config, Qwen2_5_VLConfig, Qwen2_5_VLTextConfig, Qwen2AudioEncoderConfig, Qwen2MoeConfig, Qwen2VLConfig, Qwen2VLTextConfig, Qwen3Config, Qwen3MoeConfig, Qwen3NextConfig, Qwen3VLConfig, Qwen3VLMoeConfig, Qwen3VLMoeTextConfig, Qwen3VLTextConfig, RecurrentGemmaConfig, ReformerConfig, RegNetConfig, RemBertConfig, ResNetConfig, RetriBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RTDetrConfig, RTDetrV2Config, RwkvConfig, SamConfig, Sam2Config, Sam2HieraDetConfig, Sam2VideoConfig, Sam2VisionConfig, SamHQConfig, SamHQVisionConfig, SamVisionConfig, SeamlessM4TConfig, SeamlessM4Tv2Config, SeedOssConfig, SegformerConfig, SegGptConfig, SEWConfig, SEWDConfig, SiglipConfig, Siglip2Config, Siglip2VisionConfig, SiglipVisionConfig, SmolLM3Config, SmolVLMConfig, SmolVLMVisionConfig, Speech2TextConfig, SpeechT5Config, SplinterConfig, SqueezeBertConfig, StableLmConfig, Starcoder2Config, SwiftFormerConfig, SwinConfig, Swin2SRConfig, Swinv2Config, SwitchTransformersConfig, T5Config, T5GemmaConfig, TableTransformerConfig, TapasConfig, TextNetConfig, TimeSeriesTransformerConfig, TimesFmConfig, TimesformerConfig, TimmBackboneConfig, TimmWrapperConfig, TrajectoryTransformerConfig, TransfoXLConfig, TvltConfig, TvpConfig, UdopConfig, UMT5Config, UniSpeechConfig, UniSpeechSatConfig, UnivNetConfig, VanConfig, VaultGemmaConfig, VideoLlavaConfig, VideoMAEConfig, ViltConfig, VipLlavaConfig, VisionTextDualEncoderConfig, VisualBertConfig, ViTConfig, ViTHybridConfig, ViTMAEConfig, ViTMSNConfig, VitDetConfig, VitsConfig, 
VivitConfig, VJEPA2Config, VoxtralConfig, VoxtralEncoderConfig, Wav2Vec2Config, Wav2Vec2BertConfig, Wav2Vec2ConformerConfig, WavLMConfig, WhisperConfig, XCLIPConfig, XcodecConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, xLSTMConfig, XmodConfig, YolosConfig, YosoConfig, ZambaConfig, Zamba2Config.
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] Exception in worker VllmWorkerProcess while processing method load_model.
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] Traceback (most recent call last):
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 226, in _run_worker_process
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] output = run_method(worker, method, args, kwargs)
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/utils/__init__.py", line 3060, in run_method
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] return func(*args, **kwargs)
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/worker/worker.py", line 211, in load_model
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] self.model_runner.load_model()
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1054, in load_model
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] self.model = get_model(vllm_config=self.vllm_config)
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py", line 119, in get_model
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] return loader.load_model(vllm_config=vllm_config,
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/model_executor/model_loader/base_loader.py", line 45, in load_model
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] model = initialize_model(vllm_config=vllm_config,
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/model_executor/model_loader/utils.py", line 64, in initialize_model
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] return model_class(vllm_config=vllm_config, prefix=prefix)
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/compilation/decorators.py", line 199, in __init__
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/model_executor/models/transformers.py", line 792, in __init__
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] super().__init__(vllm_config=vllm_config, prefix=prefix)
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/compilation/decorators.py", line 199, in __init__
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/model_executor/models/transformers.py", line 707, in __init__
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] super().__init__(vllm_config=vllm_config, prefix=prefix)
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/vllm/model_executor/models/transformers.py", line 462, in __init__
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] self.model: PreTrainedModel = AutoModel.from_config(
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] File "/home/user/allwork/envs/omni3/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 243, in from_config
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] raise ValueError(
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] ValueError: Unrecognized configuration class <class 'transformers.models.qwen3_omni_moe.configuration_qwen3_omni_moe.Qwen3OmniMoeConfig'> for this kind of AutoModel: AutoModel.
(VllmWorkerProcess pid=3725040) ERROR 09-30 11:01:18 [multiproc_worker_utils.py:232] Model type should be one of Aimv2Config, Aimv2VisionConfig, AlbertConfig, AlignConfig, AltCLIPConfig, ApertusConfig, ArceeConfig, AriaConfig, AriaTextConfig, ASTConfig, AutoformerConfig, AyaVisionConfig, BambaConfig, BarkConfig, BartConfig, BeitConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BitConfig, BitNetConfig, BlenderbotConfig, BlenderbotSmallConfig, BlipConfig, Blip2Config, Blip2QFormerConfig, BloomConfig, BltConfig, BridgeTowerConfig, BrosConfig, CamembertConfig, CanineConfig, ChameleonConfig, ChineseCLIPConfig, ChineseCLIPVisionConfig, ClapConfig, CLIPConfig, CLIPTextConfig, CLIPVisionConfig, CLIPSegConfig, ClvpConfig, LlamaConfig, CodeGenConfig, CohereConfig, Cohere2Config, Cohere2VisionConfig, ConditionalDetrConfig, ConvBertConfig, ConvNextConfig, ConvNextV2Config, CpmAntConfig, CsmConfig, CTRLConfig, CvtConfig, DFineConfig, DabDetrConfig, DacConfig, Data2VecAudioConfig, Data2VecTextConfig, Data2VecVisionConfig, DbrxConfig, DebertaConfig, DebertaV2Config, DecisionTransformerConfig, DeepseekV2Config, DeepseekV3Config, DeepseekVLConfig, DeepseekVLHybridConfig, DeformableDetrConfig, DeiTConfig, DepthProConfig, DetaConfig, DetrConfig, DiaConfig, DiffLlamaConfig, DinatConfig, Dinov2Config, Dinov2WithRegistersConfig, DINOv3ConvNextConfig, DINOv3ViTConfig, DistilBertConfig, DogeConfig, DonutSwinConfig, Dots1Config, DPRConfig, DPTConfig, EfficientFormerConfig, EfficientLoFTRConfig, EfficientNetConfig, ElectraConfig, Emu3Config, EncodecConfig, ErnieConfig, Ernie4_5Config, Ernie4_5_MoeConfig, ErnieMConfig, EsmConfig, EvollaConfig, Exaone4Config, FalconConfig, FalconH1Config, FalconMambaConfig, FastSpeech2ConformerConfig, FastSpeech2ConformerWithHifiGanConfig, FlaubertConfig, FlavaConfig, FlexOlmoConfig, Florence2Config, FNetConfig, FocalNetConfig, FSMTConfig, FunnelConfig, FuyuConfig, GemmaConfig, Gemma2Config, Gemma3Config, 
Gemma3TextConfig, Gemma3nConfig, Gemma3nAudioConfig, Gemma3nTextConfig, Gemma3nVisionConfig, GitConfig, GlmConfig, Glm4Config, Glm4MoeConfig, Glm4vConfig, Glm4vMoeConfig, Glm4vMoeTextConfig, Glm4vTextConfig, GLPNConfig, GotOcr2Config, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GptOssConfig, GPTJConfig, GPTSanJapaneseConfig, GraniteConfig, GraniteMoeConfig, GraniteMoeHybridConfig, GraniteMoeSharedConfig, GraphormerConfig, GroundingDinoConfig, GroupViTConfig, HeliumConfig, HGNetV2Config, HieraConfig, HubertConfig, HunYuanDenseV1Config, HunYuanMoEV1Config, IBertConfig, IdeficsConfig, Idefics2Config, Idefics3Config, Idefics3VisionConfig, IJepaConfig, ImageGPTConfig, InformerConfig, InstructBlipConfig, InstructBlipVideoConfig, InternVLConfig, InternVLVisionConfig, JambaConfig, JanusConfig, JetMoeConfig, JukeboxConfig, Kosmos2Config, Kosmos2_5Config, KyutaiSpeechToTextConfig, LayoutLMConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LevitConfig, Lfm2Config, Lfm2VlConfig, LightGlueConfig, LiltConfig, LlamaConfig, Llama4Config, Llama4TextConfig, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, LongcatFlashConfig, LongformerConfig, LongT5Config, LukeConfig, LxmertConfig, M2M100Config, MambaConfig, Mamba2Config, MarianConfig, MarkupLMConfig, Mask2FormerConfig, MaskFormerConfig, MaskFormerSwinConfig, MBartConfig, MCTCTConfig, MegaConfig, MegatronBertConfig, MetaClip2Config, MgpstrConfig, MimiConfig, MiniMaxConfig, MinistralConfig, MistralConfig, Mistral3Config, MixtralConfig, MLCDVisionConfig, MllamaConfig, MMGroundingDinoConfig, MobileBertConfig, MobileNetV1Config, MobileNetV2Config, MobileViTConfig, MobileViTV2Config, ModernBertConfig, ModernBertDecoderConfig, MoonshineConfig, MoshiConfig, MPNetConfig, MptConfig, MraConfig, MT5Config, MusicgenConfig, MusicgenMelodyConfig, MvpConfig, NatConfig, NemotronConfig, NezhaConfig, NllbMoeConfig, NystromformerConfig, OlmoConfig, Olmo2Config, Olmo3Config, 
OlmoeConfig, OmDetTurboConfig, OneFormerConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, Ovis2Config, Owlv2Config, OwlViTConfig, PaliGemmaConfig, ParakeetCTCConfig, ParakeetEncoderConfig, PatchTSMixerConfig, PatchTSTConfig, PegasusConfig, PegasusXConfig, PerceiverConfig, TimmWrapperConfig, PerceptionLMConfig, PersimmonConfig, PhiConfig, Phi3Config, Phi4MultimodalConfig, PhimoeConfig, PixtralVisionConfig, PLBartConfig, PoolFormerConfig, ProphetNetConfig, PvtConfig, PvtV2Config, QDQBertConfig, Qwen2Config, Qwen2_5_VLConfig, Qwen2_5_VLTextConfig, Qwen2AudioEncoderConfig, Qwen2MoeConfig, Qwen2VLConfig, Qwen2VLTextConfig, Qwen3Config, Qwen3MoeConfig, Qwen3NextConfig, Qwen3VLConfig, Qwen3VLMoeConfig, Qwen3VLMoeTextConfig, Qwen3VLTextConfig, RecurrentGemmaConfig, ReformerConfig, RegNetConfig, RemBertConfig, ResNetConfig, RetriBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RTDetrConfig, RTDetrV2Config, RwkvConfig, SamConfig, Sam2Config, Sam2HieraDetConfig, Sam2VideoConfig, Sam2VisionConfig, SamHQConfig, SamHQVisionConfig, SamVisionConfig, SeamlessM4TConfig, SeamlessM4Tv2Config, SeedOssConfig, SegformerConfig, SegGptConfig, SEWConfig, SEWDConfig, SiglipConfig, Siglip2Config, Siglip2VisionConfig, SiglipVisionConfig, SmolLM3Config, SmolVLMConfig, SmolVLMVisionConfig, Speech2TextConfig, SpeechT5Config, SplinterConfig, SqueezeBertConfig, StableLmConfig, Starcoder2Config, SwiftFormerConfig, SwinConfig, Swin2SRConfig, Swinv2Config, SwitchTransformersConfig, T5Config, T5GemmaConfig, TableTransformerConfig, TapasConfig, TextNetConfig, TimeSeriesTransformerConfig, TimesFmConfig, TimesformerConfig, TimmBackboneConfig, TimmWrapperConfig, TrajectoryTransformerConfig, TransfoXLConfig, TvltConfig, TvpConfig, UdopConfig, UMT5Config, UniSpeechConfig, UniSpeechSatConfig, UnivNetConfig, VanConfig, VaultGemmaConfig, VideoLlavaConfig, VideoMAEConfig, ViltConfig, VipLlavaConfig, VisionTextDualEncoderConfig, VisualBertConfig, ViTConfig, ViTHybridConfig, 
ViTMAEConfig, ViTMSNConfig, VitDetConfig, VitsConfig, VivitConfig, VJEPA2Config, VoxtralConfig, VoxtralEncoderConfig, Wav2Vec2Config, Wav2Vec2BertConfig, Wav2Vec2ConformerConfig, WavLMConfig, WhisperConfig, XCLIPConfig, XcodecConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, xLSTMConfig, XmodConfig, YolosConfig, YosoConfig, ZambaConfig, Zamba2Config.
ERROR 09-30 11:01:18 [multiproc_worker_utils.py:116] Worker VllmWorkerProcess pid 3725040 died, exit code: -15
INFO 09-30 11:01:18 [multiproc_worker_utils.py:120] Killing local vLLM worker processes
[rank0]:[W930 11:01:19.644779911 ProcessGroupNCCL.cpp:1538] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
/home/user/allwork/envs/omni3/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Create an environment with Python 3.10.
Install the dependencies.
Install the most recent transformers from git.
Download `https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct` locally.
Provide the path of the locally available model in the `web_demo.py` file.
From within the environment, run
`python web_demo.py`
which produces the error shown in the traceback above.
### Expected behavior
The correct version should execute and the demo should run locally,
like
`https://huggingface.co/spaces/Qwen/Qwen2.5-Omni-7B-Demo`
|
{
"login": "Tortoise17",
"id": 36593708,
"node_id": "MDQ6VXNlcjM2NTkzNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/36593708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tortoise17",
"html_url": "https://github.com/Tortoise17",
"followers_url": "https://api.github.com/users/Tortoise17/followers",
"following_url": "https://api.github.com/users/Tortoise17/following{/other_user}",
"gists_url": "https://api.github.com/users/Tortoise17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tortoise17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tortoise17/subscriptions",
"organizations_url": "https://api.github.com/users/Tortoise17/orgs",
"repos_url": "https://api.github.com/users/Tortoise17/repos",
"events_url": "https://api.github.com/users/Tortoise17/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tortoise17/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41213/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41212
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41212/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41212/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41212/events
|
https://github.com/huggingface/transformers/pull/41212
| 3,468,555,904
|
PR_kwDOCUB6oc6rSm-2
| 41,212
|
Add EoMT with DINOv3 backbone
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-30T09:55:46
| 2025-10-20T10:18:43
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41212",
"html_url": "https://github.com/huggingface/transformers/pull/41212",
"diff_url": "https://github.com/huggingface/transformers/pull/41212.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41212.patch",
"merged_at": null
}
|
# What does this PR do?
This PR adds EoMT with a DINOv3 backbone. The authors of EoMT released new checkpoints which swap the DINOv2 backbone for the newer DINOv3: https://github.com/tue-mps/eomt/blob/master/model_zoo/dinov3.md
## Disclaimer
This PR was implemented using OpenAI Codex and further improved by me.
## Who can review?
@yonigozlan
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41212/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41211
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41211/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41211/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41211/events
|
https://github.com/huggingface/transformers/issues/41211
| 3,468,498,317
|
I_kwDOCUB6oc7OvRWN
| 41,211
|
Add DEIMv2
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-30T09:43:07
| 2025-10-04T18:44:06
| null |
CONTRIBUTOR
| null | null | null | null |
### Model description
It would be nice to integrate DEIMv2, a new state-of-the-art model for real-time object detection based on DINOv3. The weights are released under Apache 2.0.
Related thread: https://github.com/Intellindust-AI-Lab/DEIMv2/issues/20
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
Code: https://github.com/Intellindust-AI-Lab/DEIMv2
Weights (on Google Drive for now): https://github.com/Intellindust-AI-Lab/DEIMv2?tab=readme-ov-file#1-model-zoo
Ideally, the [AutoBackbone API](https://huggingface.co/docs/transformers/main_classes/backbones) can be leveraged to avoid re-implementing the entire DINOv3 backbone in `modular_deimv2.py` and `modeling_deimv2.py`. See an example of how this is leveraged for DETR [here](https://github.com/huggingface/transformers/blob/59035fd0e1876f9e526488b61fe43ff8829059f6/src/transformers/models/detr/modeling_detr.py#L280).
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41211/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41210
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41210/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41210/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41210/events
|
https://github.com/huggingface/transformers/issues/41210
| 3,468,216,779
|
I_kwDOCUB6oc7OuMnL
| 41,210
|
Qwen3-Next tokenizer fails to apply chat template for openai-style message inputs
|
{
"login": "felixzhu555",
"id": 79335195,
"node_id": "MDQ6VXNlcjc5MzM1MTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/79335195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felixzhu555",
"html_url": "https://github.com/felixzhu555",
"followers_url": "https://api.github.com/users/felixzhu555/followers",
"following_url": "https://api.github.com/users/felixzhu555/following{/other_user}",
"gists_url": "https://api.github.com/users/felixzhu555/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felixzhu555/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felixzhu555/subscriptions",
"organizations_url": "https://api.github.com/users/felixzhu555/orgs",
"repos_url": "https://api.github.com/users/felixzhu555/repos",
"events_url": "https://api.github.com/users/felixzhu555/events{/privacy}",
"received_events_url": "https://api.github.com/users/felixzhu555/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T08:41:32
| 2025-09-30T13:54:36
| 2025-09-30T13:54:36
|
NONE
| null | null | null | null |
### System Info
```
transformers env
- `transformers` version: 4.56.2
- Platform: Linux-5.4.0-1150-aws-fips-x86_64-with-glibc2.31
- Python version: 3.11.13
- Huggingface_hub version: 0.35.1
- Safetensors version: 0.6.2
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
```
### Who can help?
@ArthurZucker @itazap
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Running this code with [openai-style](https://github.com/openai/openai-python/blob/main/src/openai/types/chat/chat_completion_content_part_text_param.py) chat input
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Next-80B-A3B-Instruct")
chat = [{'role': 'user', 'content': [{'type': 'text', 'text': 'hello world!'}]}]
print(tokenizer.apply_chat_template(chat, chat_template=None, tokenize=False))
```
yields
```
<|im_start|>user
<|im_end|>
```
where we clearly see the actual message content is completely ignored by `apply_chat_template`.
I am not sure if this is intentional, since the tokenizer for Qwen3-Next resolves to `Qwen2TokenizerFast`, which may simply not support this type of chat input.
### Expected behavior
I would expect the output to be
```
<|im_start|>user
hello world!<|im_end|>
```
i.e. the same as if the chat input were `[{'role': 'user', 'content': 'hello world!'}]`.
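
As a workaround until the template handles list-style content, the openai-style content parts can be flattened into plain strings before calling `apply_chat_template`. A minimal sketch (the helper name is hypothetical, not part of transformers):

```python
def flatten_chat(messages):
    """Collapse openai-style content-part lists into plain strings."""
    flattened = []
    for msg in messages:
        content = msg["content"]
        if isinstance(content, list):
            # Join the text of every {"type": "text", "text": ...} part
            content = "".join(
                part["text"] for part in content if part.get("type") == "text"
            )
        flattened.append({"role": msg["role"], "content": content})
    return flattened

chat = [{'role': 'user', 'content': [{'type': 'text', 'text': 'hello world!'}]}]
print(flatten_chat(chat))  # → [{'role': 'user', 'content': 'hello world!'}]
```

The flattened list can then be passed to `tokenizer.apply_chat_template` as usual.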
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41210/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41209
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41209/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41209/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41209/events
|
https://github.com/huggingface/transformers/issues/41209
| 3,468,183,884
|
I_kwDOCUB6oc7OuElM
| 41,209
|
Why it's easy to get nan for A_log in Qwen3Next
|
{
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-30T08:33:17
| 2025-10-01T11:25:18
| null |
CONTRIBUTOR
| null | null | null | null |
### System Info
main branch to use Qwen3Next
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I have tried training many times, but it is very easy to get a NaN value in `A_log` for some layer. Does anyone know the cause of this kind of instability?
```python
In [5]: for i, param in enumerate(a['shadow_params']):
...: if param.isnan().any():
...: print(i, param)
...:
237 tensor([ 2.5156, 1.8125, 1.4062, 0.1504, 2.0000, 2.3594, 1.9375, 1.8438,
1.3047, 0.8633, 2.5938, 2.2500, 2.5000, 0.4414, 1.8516, -0.3848,
2.2812, 2.7500, nan, 1.5078, 2.5469, 1.6406, 2.5000, 1.5391,
2.4219, 1.6953, 1.4609, 1.0625, 0.4199, 2.5156, 2.0156, 2.4375],
dtype=torch.bfloat16)
In [6]: a['parameter_names'][237]
Out[6]: 'transformer.model.layers.20.linear_attn.A_log'
```
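
The scan in the snippet above can be wrapped into a small helper. A torch-free sketch of the same idea, using plain floats for illustration (the helper and parameter names are hypothetical):

```python
import math

def find_nan_params(named_params):
    """Return names of parameters that contain at least one NaN value."""
    bad = []
    for name, values in named_params:
        if any(math.isnan(v) for v in values):
            bad.append(name)
    return bad

params = [
    ("layers.19.linear_attn.A_log", [2.5, 1.8, 0.4]),
    ("layers.20.linear_attn.A_log", [2.5, float("nan"), 0.4]),
]
print(find_nan_params(params))  # → ['layers.20.linear_attn.A_log']
```

With torch tensors the inner check would be `param.isnan().any()`, as in the snippet above.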
### Expected behavior
Normal training
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41209/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41208
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41208/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41208/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41208/events
|
https://github.com/huggingface/transformers/issues/41208
| 3,468,029,235
|
I_kwDOCUB6oc7Ote0z
| 41,208
|
Integrate mamba SSM kernels from the hub
|
{
"login": "romitjain",
"id": 11757603,
"node_id": "MDQ6VXNlcjExNzU3NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/11757603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/romitjain",
"html_url": "https://github.com/romitjain",
"followers_url": "https://api.github.com/users/romitjain/followers",
"following_url": "https://api.github.com/users/romitjain/following{/other_user}",
"gists_url": "https://api.github.com/users/romitjain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/romitjain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/romitjain/subscriptions",
"organizations_url": "https://api.github.com/users/romitjain/orgs",
"repos_url": "https://api.github.com/users/romitjain/repos",
"events_url": "https://api.github.com/users/romitjain/events{/privacy}",
"received_events_url": "https://api.github.com/users/romitjain/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-30T07:50:52
| 2025-10-13T09:08:35
| null |
CONTRIBUTOR
| null | null | null | null |
### Feature request
Currently, mamba kernels are imported from the upstream PyPI package, e.g. for [GraniteMoeHybrid](https://github.com/huggingface/transformers/blob/main/src/transformers/models/granitemoehybrid/modeling_granitemoehybrid.py#L44-L46).
Can we migrate this to the kernels-hub variant (`kernels-community/mamba-ssm`) instead?
### Motivation
This removes the external dependency. The kernel hub is already integrated in several other places throughout the library.
### Your contribution
I can submit a PR for migrating from the PyPI `mamba_ssm` package to the `kernels` package for mamba ops.
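
A sketch of what the migration might look like, assuming the `kernels` package's `get_kernel` helper; the fallback logic and function name are illustrative, not the final design:

```python
def load_mamba_kernels():
    """Prefer the Hub-distributed kernels; fall back to the PyPI package."""
    try:
        from kernels import get_kernel
        # Downloads and loads the compiled ops from the Hub on first use.
        return get_kernel("kernels-community/mamba-ssm")
    except ImportError:
        try:
            import mamba_ssm  # legacy PyPI dependency
            return mamba_ssm
        except ImportError:
            return None  # neither available: use the eager PyTorch path

ops = load_mamba_kernels()
```

In the modeling code, the existing `is_mamba_ssm_available()`-style guards would then dispatch on whether `ops` is `None`.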
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41208/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41207
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41207/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41207/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41207/events
|
https://github.com/huggingface/transformers/pull/41207
| 3,467,969,717
|
PR_kwDOCUB6oc6rQm8h
| 41,207
|
Avoid assumption that model has config attribute in deepspeed
|
{
"login": "tomaarsen",
"id": 37621491,
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaarsen",
"html_url": "https://github.com/tomaarsen",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-30T07:31:27
| 2025-09-30T09:42:51
| 2025-09-30T09:42:51
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41207",
"html_url": "https://github.com/huggingface/transformers/pull/41207",
"diff_url": "https://github.com/huggingface/transformers/pull/41207.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41207.patch",
"merged_at": "2025-09-30T09:42:51"
}
|
# What does this PR do?
This avoids the assumption that a model has a `config` attribute, which results in a cleaner error message for downstream usage of the Trainer, e.g. with Sentence Transformers. Specifically, it would help with https://github.com/UKPLab/sentence-transformers/issues/3531
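The defensive pattern can be illustrated with a small sketch (a hypothetical helper, not the actual diff): use `getattr` so that models without a `config` attribute yield `None` instead of raising an `AttributeError` deep inside the integration.

```python
def get_model_hidden_size(model):
    """Return model.config.hidden_size if both attributes exist, else None."""
    config = getattr(model, "config", None)
    if config is None:
        # Model has no `config` attribute (e.g. a SentenceTransformer wrapper);
        # the caller can then skip config-dependent defaults.
        return None
    return getattr(config, "hidden_size", None)
```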
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
cc @SunMarc
- Tom Aarsen
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41207/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41206
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41206/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41206/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41206/events
|
https://github.com/huggingface/transformers/issues/41206
| 3,466,286,854
|
I_kwDOCUB6oc7Om1cG
| 41,206
|
LLaVA-Gemma processor is bugged
|
{
"login": "nhatkhtn",
"id": 61368343,
"node_id": "MDQ6VXNlcjYxMzY4MzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/61368343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nhatkhtn",
"html_url": "https://github.com/nhatkhtn",
"followers_url": "https://api.github.com/users/nhatkhtn/followers",
"following_url": "https://api.github.com/users/nhatkhtn/following{/other_user}",
"gists_url": "https://api.github.com/users/nhatkhtn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nhatkhtn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nhatkhtn/subscriptions",
"organizations_url": "https://api.github.com/users/nhatkhtn/orgs",
"repos_url": "https://api.github.com/users/nhatkhtn/repos",
"events_url": "https://api.github.com/users/nhatkhtn/events{/privacy}",
"received_events_url": "https://api.github.com/users/nhatkhtn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-29T19:17:35
| 2025-09-30T10:04:44
| 2025-09-30T10:04:44
|
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.56.1
- Platform: Linux-5.14.0-503.40.1.el9_5.x86_64-x86_64-with-glibc2.34
- Python version: 3.12.11
- Huggingface_hub version: 0.35.0
- Safetensors version: 0.5.3
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA B200
### Who can help?
I think @zucchini-nlp can help with this.
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following (official) code snippet from the [model card](https://huggingface.co/Intel/llava-gemma-2b):
```python
import requests
from PIL import Image
from transformers import (
LlavaForConditionalGeneration,
AutoTokenizer,
AutoProcessor,
CLIPImageProcessor
)
#In this repo, needed for version < 4.41.1
#from processing_llavagemma import LlavaGemmaProcessor
#processor = LlavaGemmaProcessor( tokenizer=AutoTokenizer.from_pretrained(checkpoint), image_processor=CLIPImageProcessor.from_pretrained(checkpoint))
checkpoint = "Intel/llava-gemma-2b"
# Load model
model = LlavaForConditionalGeneration.from_pretrained(checkpoint)
processor = AutoProcessor.from_pretrained(checkpoint)
# Prepare inputs
# Use gemma chat template
prompt = processor.tokenizer.apply_chat_template(
[{'role': 'user', 'content': "<image>\nWhat's the content of the image?"}],
tokenize=False,
add_generation_prompt=True
)
url = "https://www.ilankelman.org/stopsigns/australia.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=prompt, images=image, return_tensors="pt")
# Generate
generate_ids = model.generate(**inputs, max_length=30)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
```
The following error will show up:
```
TypeError Traceback (most recent call last)
Cell In[46], [line 32](vscode-notebook-cell:?execution_count=46&line=32)
30 url = "https://www.ilankelman.org/stopsigns/australia.jpg"
31 image = Image.open(requests.get(url, stream=True).raw)
---> [32](vscode-notebook-cell:?execution_count=46&line=32) inputs = processor(text=prompt, images=image, return_tensors="pt").to(device)
34 # Generate
35 generate_ids = model.generate(**inputs)
File .../transformers/models/llava/processing_llava.py:156, in LlavaProcessor.__call__(self, images, text, audio, videos, **kwargs)
154 pixel_values = image_inputs["pixel_values"]
155 height, width = get_image_size(to_numpy_array(pixel_values[0]))
--> [156](.../transformers/models/llava/processing_llava.py:156) num_image_tokens = (height // self.patch_size) * (
157 width // self.patch_size
158 ) + self.num_additional_image_tokens
159 if self.vision_feature_select_strategy == "default":
160 num_image_tokens -= 1
TypeError: unsupported operand type(s) for //: 'int' and 'NoneType'
```
Comparing this repo and the LLaVA 1.5 repo, it seems that the `processor_config.json` file is missing. Adding the following code makes the model (seemingly) work:
```python
processor.num_additional_image_tokens = 1
processor.patch_size = 14
processor.vision_feature_select_strategy = "default"
```
### Expected behavior
The code should run as-is without errors.
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41206/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41205
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41205/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41205/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41205/events
|
https://github.com/huggingface/transformers/pull/41205
| 3,466,063,315
|
PR_kwDOCUB6oc6rKL68
| 41,205
|
[docs] Fix tp_plan
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-29T18:12:46
| 2025-09-30T16:27:52
| 2025-09-30T16:27:50
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41205",
"html_url": "https://github.com/huggingface/transformers/pull/41205",
"diff_url": "https://github.com/huggingface/transformers/pull/41205.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41205.patch",
"merged_at": "2025-09-30T16:27:50"
}
|
Fixes https://github.com/huggingface/transformers/issues/41189 and removes mention of manual `tp_plan`
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41205/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41204
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41204/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41204/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41204/events
|
https://github.com/huggingface/transformers/pull/41204
| 3,465,933,586
|
PR_kwDOCUB6oc6rJvxs
| 41,204
|
Fix multi-video timestamp bug in Qwen-3-VL
|
{
"login": "tim120526",
"id": 43242086,
"node_id": "MDQ6VXNlcjQzMjQyMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/43242086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tim120526",
"html_url": "https://github.com/tim120526",
"followers_url": "https://api.github.com/users/tim120526/followers",
"following_url": "https://api.github.com/users/tim120526/following{/other_user}",
"gists_url": "https://api.github.com/users/tim120526/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tim120526/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tim120526/subscriptions",
"organizations_url": "https://api.github.com/users/tim120526/orgs",
"repos_url": "https://api.github.com/users/tim120526/repos",
"events_url": "https://api.github.com/users/tim120526/events{/privacy}",
"received_events_url": "https://api.github.com/users/tim120526/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-29T17:31:57
| 2025-09-30T14:50:01
| 2025-09-30T14:30:54
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41204",
"html_url": "https://github.com/huggingface/transformers/pull/41204",
"diff_url": "https://github.com/huggingface/transformers/pull/41204.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41204.patch",
"merged_at": null
}
|
# What does this PR do?
This PR fixes a timestamp placeholder error that occurs when multiple videos are included in a single text input.
In `Qwen3-VLProcessor`, frames are sampled from videos, and the relative timestamp placeholder `"<{curr_time:.1f} seconds>"` is constructed for each frame to indicate its corresponding time information. Previously, the code used the index `i` to retrieve the video metadata (`metadata = video_metadata[i]`), which only worked correctly for a single video input. When multiple videos were provided, the metadata of the first video was incorrectly used to generate timestamp placeholders for all the other videos, resulting in errors.
This PR replaces the index `i` with the correct video index `index` when retrieving metadata, ensuring that each video uses its appropriate timestamp placeholders.
Hi @zach-huggingface @SunMarc, I am just following up on this bug and the proposed fix. It is a one-line change, and I would be happy to assist with any questions or further clarifications to help get it merged.
Thank you very much!
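The bug and the fix can be sketched with a simplified loop (illustrative only; the names `video_metadata` and `frames_per_video` follow the description above, not the actual processor code):

```python
def build_timestamp_placeholders(video_metadata, frames_per_video):
    """Build per-frame timestamp placeholders for each video.

    The buggy version indexed `video_metadata` with a stale index, so the
    first video's timing was reused for all videos; here each video's own
    `metadata` is used.
    """
    placeholders = []
    for index, metadata in enumerate(video_metadata):
        fps = metadata["fps"]
        placeholders.append(
            [f"<{frame / fps:.1f} seconds>" for frame in range(frames_per_video[index])]
        )
    return placeholders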
|
{
"login": "tim120526",
"id": 43242086,
"node_id": "MDQ6VXNlcjQzMjQyMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/43242086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tim120526",
"html_url": "https://github.com/tim120526",
"followers_url": "https://api.github.com/users/tim120526/followers",
"following_url": "https://api.github.com/users/tim120526/following{/other_user}",
"gists_url": "https://api.github.com/users/tim120526/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tim120526/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tim120526/subscriptions",
"organizations_url": "https://api.github.com/users/tim120526/orgs",
"repos_url": "https://api.github.com/users/tim120526/repos",
"events_url": "https://api.github.com/users/tim120526/events{/privacy}",
"received_events_url": "https://api.github.com/users/tim120526/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41204/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41204/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41203
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41203/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41203/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41203/events
|
https://github.com/huggingface/transformers/issues/41203
| 3,465,917,535
|
I_kwDOCUB6oc7OlbRf
| 41,203
|
[audio] standardization tracker
|
{
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-29T17:26:12
| 2025-09-29T17:26:12
| null |
CONTRIBUTOR
| null | null | null | null |
This issue is a tracker for audio-related standardization efforts across the library.
_**to be updated...**_
<img width="2000" height="1000" alt="Image" src="https://github.com/user-attachments/assets/383bcebd-0e98-4af1-a076-473e3f7c41f4" />
Efforts:
- [ ] #41202
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41203/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41203/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41202
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41202/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41202/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41202/events
|
https://github.com/huggingface/transformers/pull/41202
| 3,465,882,537
|
PR_kwDOCUB6oc6rJk7H
| 41,202
|
[WIP] standardize audio kwargs
|
{
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-29T17:13:45
| 2025-09-29T17:22:42
| null |
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41202",
"html_url": "https://github.com/huggingface/transformers/pull/41202",
"diff_url": "https://github.com/huggingface/transformers/pull/41202.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41202.patch",
"merged_at": null
}
|
# What does this PR do?
This PR standardizes audio-related kwargs naming across the library to ensure future integrations follow consistent best practices.
A dedicated decorator is introduced to maintain backward compatibility, with an INFO-level log message for visibility.
> [!NOTE]
> While renaming `input_features` to `audio_spectrogram` may appear unnecessarily breaking, it is worth noting that `input_features` is **always** used for audio spectrograms throughout the library.
> Our naming conventions should be explicit and aligned with related libraries (e.g., torchaudio, torchcodec, vLLM, OpenAI API). Let's improve, and ensure we deprecate in a user-friendly manner. 🤗
## Breakdown
### Modeling kwargs
- `input_values` → `audio`
- `input_features` → `audio_spectrogram` (supports classical, mel, log-mel, MFCC)
### Feature extractor kwargs
- `sampling_rate` → `sample_rate`
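The backward-compatibility decorator could look something like this minimal sketch (a hypothetical `renamed_kwargs` helper, not the PR's actual implementation):

```python
import functools
import logging

logger = logging.getLogger(__name__)


def renamed_kwargs(**old_to_new):
    """Accept legacy kwarg names, log an INFO message, and forward the new names."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for old, new in old_to_new.items():
                if old in kwargs:
                    logger.info("`%s` is deprecated, use `%s` instead.", old, new)
                    kwargs[new] = kwargs.pop(old)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@renamed_kwargs(sampling_rate="sample_rate")
def extract_features(audio, sample_rate=16000):
    return audio, sample_rate
```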
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41202/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41202/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41201
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41201/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41201/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41201/events
|
https://github.com/huggingface/transformers/pull/41201
| 3,465,736,552
|
PR_kwDOCUB6oc6rJFx9
| 41,201
|
Fix docker quantization
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-29T16:26:04
| 2025-09-29T16:37:05
| 2025-09-29T16:36:30
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41201",
"html_url": "https://github.com/huggingface/transformers/pull/41201",
"diff_url": "https://github.com/huggingface/transformers/pull/41201.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41201.patch",
"merged_at": "2025-09-29T16:36:30"
}
|
# What does this PR do?
As per title
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41201/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41200
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41200/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41200/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41200/events
|
https://github.com/huggingface/transformers/pull/41200
| 3,465,647,831
|
PR_kwDOCUB6oc6rIyo1
| 41,200
|
Fix 8bit bnb loading
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-29T15:59:34
| 2025-09-29T16:34:48
| 2025-09-29T16:34:46
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41200",
"html_url": "https://github.com/huggingface/transformers/pull/41200",
"diff_url": "https://github.com/huggingface/transformers/pull/41200.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41200.patch",
"merged_at": "2025-09-29T16:34:46"
}
|
# What does this PR do?
This PR fixes some code related to 8-bit models that was changed during a recent refactor https://github.com/huggingface/transformers/pull/41138. The issue was that the `SCB` value was modified when moving the param from CPU to CUDA.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41200/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41199
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41199/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41199/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41199/events
|
https://github.com/huggingface/transformers/pull/41199
| 3,465,181,298
|
PR_kwDOCUB6oc6rHMrD
| 41,199
|
Add Code World Model (CWM)
|
{
"login": "jacobkahn",
"id": 7871817,
"node_id": "MDQ6VXNlcjc4NzE4MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7871817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacobkahn",
"html_url": "https://github.com/jacobkahn",
"followers_url": "https://api.github.com/users/jacobkahn/followers",
"following_url": "https://api.github.com/users/jacobkahn/following{/other_user}",
"gists_url": "https://api.github.com/users/jacobkahn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jacobkahn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacobkahn/subscriptions",
"organizations_url": "https://api.github.com/users/jacobkahn/orgs",
"repos_url": "https://api.github.com/users/jacobkahn/repos",
"events_url": "https://api.github.com/users/jacobkahn/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacobkahn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-29T14:02:49
| 2025-10-09T15:57:46
| 2025-10-09T15:57:45
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41199",
"html_url": "https://github.com/huggingface/transformers/pull/41199",
"diff_url": "https://github.com/huggingface/transformers/pull/41199.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41199.patch",
"merged_at": "2025-10-09T15:57:45"
}
|
Adds the Code World Model (CWM) - https://ai.meta.com/research/publications/cwm-an-open-weights-llm-for-research-on-code-generation-with-world-models/
High-level implementation details:
- This is a GQA + local/global sliding window attention model
- Implemented in HF Llama3 + interleaved sliding window attention
- Inheriting from Gemma2/3 requires weight remapping which breaks VLLM compatibility and other components, so this is implemented using the existing causal mask utils from HF
The model repos are:
- https://huggingface.co/facebook/cwm
- https://huggingface.co/facebook/cwm-sft
- https://huggingface.co/facebook/cwm-pretrain
Note that for VLLM compatibility, the model `config.json` files still refer to `Llama3ForCausalLM` and a `llama` `model_type` — see [example](https://huggingface.co/facebook/cwm/blob/main/config.json). https://github.com/vllm-project/vllm/pull/25611 adds support for mapping `CwmForCausalLM` to the Llama3 model class in VLLM, since VLLM supports `Llama3` + `layer_types` with local/global attention - see [docs](https://docs.vllm.ai/en/latest/contributing/model/basic.html#how-to-support-models-with-interleaving-sliding-windows). The model type in the `config.json` will be updated on HF (and the special automapping condition removed) once this PR is merged and a Transformers release containing the `CwmForCausalLM` model class has shipped.
@ArthurZucker, @zucchini-nlp
Supersedes https://github.com/huggingface/transformers/pull/41188 due to some fork misery
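The interleaved local/global layout described above can be sketched as a `layer_types` list. The entry names follow the HF config convention of `"sliding_attention"`/`"full_attention"` values, but the interleave ratio below is illustrative, not the actual CWM pattern:

```python
def build_layer_types(num_layers: int, global_every: int = 4) -> list[str]:
    # Alternate sliding-window ("local") layers with full-attention
    # ("global") layers, one global layer every `global_every` layers.
    return [
        "full_attention" if (i + 1) % global_every == 0 else "sliding_attention"
        for i in range(num_layers)
    ]

layer_types = build_layer_types(8, global_every=4)
print(layer_types)
```

A list like this is what VLLM's interleaved-sliding-window support consumes per layer to pick local vs. global attention.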
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41199/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/transformers/issues/41199/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41198
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41198/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41198/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41198/events
|
https://github.com/huggingface/transformers/pull/41198
| 3,465,039,570
|
PR_kwDOCUB6oc6rGtIL
| 41,198
|
Remove unnecessary Optional typing
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-29T13:30:03
| 2025-09-30T08:50:00
| 2025-09-30T08:38:05
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41198",
"html_url": "https://github.com/huggingface/transformers/pull/41198",
"diff_url": "https://github.com/huggingface/transformers/pull/41198.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41198.patch",
"merged_at": "2025-09-30T08:38:05"
}
|
# What does this PR do?
As the title says
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41198/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41197
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41197/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41197/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41197/events
|
https://github.com/huggingface/transformers/pull/41197
| 3,464,983,656
|
PR_kwDOCUB6oc6rGgr6
| 41,197
|
[Hack] Support config file for `DeepSeek-V3.2`
|
{
"login": "hnyls2002",
"id": 95566987,
"node_id": "U_kgDOBbI8iw",
"avatar_url": "https://avatars.githubusercontent.com/u/95566987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hnyls2002",
"html_url": "https://github.com/hnyls2002",
"followers_url": "https://api.github.com/users/hnyls2002/followers",
"following_url": "https://api.github.com/users/hnyls2002/following{/other_user}",
"gists_url": "https://api.github.com/users/hnyls2002/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hnyls2002/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hnyls2002/subscriptions",
"organizations_url": "https://api.github.com/users/hnyls2002/orgs",
"repos_url": "https://api.github.com/users/hnyls2002/repos",
"events_url": "https://api.github.com/users/hnyls2002/events{/privacy}",
"received_events_url": "https://api.github.com/users/hnyls2002/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-29T13:16:16
| 2025-10-01T12:43:22
| 2025-10-01T12:43:21
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41197",
"html_url": "https://github.com/huggingface/transformers/pull/41197",
"diff_url": "https://github.com/huggingface/transformers/pull/41197.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41197.patch",
"merged_at": null
}
|
This is a hack PR to support `DeepseekV32Config`, as the `deepseek_v32` model-type key is missing from transformers. Also see #41196 and https://github.com/sgl-project/sglang/issues/11060#issuecomment-3346459440
|
{
"login": "hnyls2002",
"id": 95566987,
"node_id": "U_kgDOBbI8iw",
"avatar_url": "https://avatars.githubusercontent.com/u/95566987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hnyls2002",
"html_url": "https://github.com/hnyls2002",
"followers_url": "https://api.github.com/users/hnyls2002/followers",
"following_url": "https://api.github.com/users/hnyls2002/following{/other_user}",
"gists_url": "https://api.github.com/users/hnyls2002/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hnyls2002/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hnyls2002/subscriptions",
"organizations_url": "https://api.github.com/users/hnyls2002/orgs",
"repos_url": "https://api.github.com/users/hnyls2002/repos",
"events_url": "https://api.github.com/users/hnyls2002/events{/privacy}",
"received_events_url": "https://api.github.com/users/hnyls2002/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41197/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41197/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41196
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41196/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41196/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41196/events
|
https://github.com/huggingface/transformers/issues/41196
| 3,464,764,517
|
I_kwDOCUB6oc7OhBxl
| 41,196
|
[Model] Support Deepseek-V3.2-Exp
|
{
"login": "hnyls2002",
"id": 95566987,
"node_id": "U_kgDOBbI8iw",
"avatar_url": "https://avatars.githubusercontent.com/u/95566987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hnyls2002",
"html_url": "https://github.com/hnyls2002",
"followers_url": "https://api.github.com/users/hnyls2002/followers",
"following_url": "https://api.github.com/users/hnyls2002/following{/other_user}",
"gists_url": "https://api.github.com/users/hnyls2002/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hnyls2002/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hnyls2002/subscriptions",
"organizations_url": "https://api.github.com/users/hnyls2002/orgs",
"repos_url": "https://api.github.com/users/hnyls2002/repos",
"events_url": "https://api.github.com/users/hnyls2002/events{/privacy}",
"received_events_url": "https://api.github.com/users/hnyls2002/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-29T12:21:18
| 2025-10-01T12:54:28
| null |
NONE
| null | null | null | null |
### Feature request
At least support the config, which can be found in https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Exp/blob/main/config.json
### Motivation
So we can natively use the model config from HF to deploy DeepSeek-V3.2 in SGLang without a hacky patch.
### Your contribution
Only support the config class in #41197
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41196/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41195
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41195/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41195/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41195/events
|
https://github.com/huggingface/transformers/pull/41195
| 3,464,741,625
|
PR_kwDOCUB6oc6rFrSm
| 41,195
|
Use accelerator API to free device memory
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-29T12:15:56
| 2025-10-08T10:12:26
| 2025-10-08T10:11:19
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41195",
"html_url": "https://github.com/huggingface/transformers/pull/41195",
"diff_url": "https://github.com/huggingface/transformers/pull/41195.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41195.patch",
"merged_at": "2025-10-08T10:11:19"
}
|
# What does this PR do?
To simplify device-related code.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41195/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41194
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41194/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41194/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41194/events
|
https://github.com/huggingface/transformers/pull/41194
| 3,464,440,780
|
PR_kwDOCUB6oc6rEngR
| 41,194
|
Don't convert to `safetensors` on the fly if the call is from testing
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-29T11:11:06
| 2025-10-01T15:46:22
| 2025-10-01T15:46:21
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41194",
"html_url": "https://github.com/huggingface/transformers/pull/41194",
"diff_url": "https://github.com/huggingface/transformers/pull/41194.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41194.patch",
"merged_at": "2025-10-01T15:46:21"
}
|
# What does this PR do?
Don't convert to `safetensors` on the fly if the call is from testing
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41194/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41193
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41193/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41193/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41193/events
|
https://github.com/huggingface/transformers/pull/41193
| 3,463,924,125
|
PR_kwDOCUB6oc6rC1zP
| 41,193
|
CI Runners - move amd runners mi355 and 325 to runner group
|
{
"login": "glegendre01",
"id": 115986922,
"node_id": "U_kgDOBunR6g",
"avatar_url": "https://avatars.githubusercontent.com/u/115986922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/glegendre01",
"html_url": "https://github.com/glegendre01",
"followers_url": "https://api.github.com/users/glegendre01/followers",
"following_url": "https://api.github.com/users/glegendre01/following{/other_user}",
"gists_url": "https://api.github.com/users/glegendre01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/glegendre01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/glegendre01/subscriptions",
"organizations_url": "https://api.github.com/users/glegendre01/orgs",
"repos_url": "https://api.github.com/users/glegendre01/repos",
"events_url": "https://api.github.com/users/glegendre01/events{/privacy}",
"received_events_url": "https://api.github.com/users/glegendre01/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-29T08:57:48
| 2025-09-29T09:14:20
| 2025-09-29T09:14:19
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41193",
"html_url": "https://github.com/huggingface/transformers/pull/41193",
"diff_url": "https://github.com/huggingface/transformers/pull/41193.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41193.patch",
"merged_at": "2025-09-29T09:14:19"
}
|
Move CI runners 325/355 from scale-set targeting to a runner group
|
{
"login": "glegendre01",
"id": 115986922,
"node_id": "U_kgDOBunR6g",
"avatar_url": "https://avatars.githubusercontent.com/u/115986922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/glegendre01",
"html_url": "https://github.com/glegendre01",
"followers_url": "https://api.github.com/users/glegendre01/followers",
"following_url": "https://api.github.com/users/glegendre01/following{/other_user}",
"gists_url": "https://api.github.com/users/glegendre01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/glegendre01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/glegendre01/subscriptions",
"organizations_url": "https://api.github.com/users/glegendre01/orgs",
"repos_url": "https://api.github.com/users/glegendre01/repos",
"events_url": "https://api.github.com/users/glegendre01/events{/privacy}",
"received_events_url": "https://api.github.com/users/glegendre01/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41193/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41192
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41192/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41192/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41192/events
|
https://github.com/huggingface/transformers/pull/41192
| 3,463,430,665
|
PR_kwDOCUB6oc6rBLTV
| 41,192
|
Fix Qwen3-Omni audio_token_id serialization issue
|
{
"login": "eun2ce",
"id": 40400092,
"node_id": "MDQ6VXNlcjQwNDAwMDky",
"avatar_url": "https://avatars.githubusercontent.com/u/40400092?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eun2ce",
"html_url": "https://github.com/eun2ce",
"followers_url": "https://api.github.com/users/eun2ce/followers",
"following_url": "https://api.github.com/users/eun2ce/following{/other_user}",
"gists_url": "https://api.github.com/users/eun2ce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eun2ce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eun2ce/subscriptions",
"organizations_url": "https://api.github.com/users/eun2ce/orgs",
"repos_url": "https://api.github.com/users/eun2ce/repos",
"events_url": "https://api.github.com/users/eun2ce/events{/privacy}",
"received_events_url": "https://api.github.com/users/eun2ce/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-29T06:18:31
| 2025-09-30T09:15:56
| 2025-09-30T09:15:56
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41192",
"html_url": "https://github.com/huggingface/transformers/pull/41192",
"diff_url": "https://github.com/huggingface/transformers/pull/41192.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41192.patch",
"merged_at": "2025-09-30T09:15:56"
}
|
# What does this PR do?
This PR fixes a critical bug in Qwen3-Omni model configuration where `audio_token_id` was not properly preserved during `save_pretrained`/`from_pretrained` operations, causing incorrect inference results in training scenarios.
## Problem
When using `save_pretrained()` followed by `from_pretrained()` on Qwen3-Omni models, the `audio_token_id` value was being reset to the default value (151646) instead of preserving the original value (e.g., 151675), leading to abnormal audio training/inference results.
## Root Cause
The issue was caused by a mismatch between the `attribute_map` in `Qwen3OmniMoeThinkerConfig` and the actual implementation:
- `attribute_map` was mapping `audio_token_id` → `audio_token_index`
- But the modular implementation deletes `audio_token_index` and uses `audio_token_id` directly
- This caused serialization/deserialization to fail silently
## Solution
Remove the conflicting `attribute_map` from `Qwen3OmniMoeThinkerConfig` to allow `audio_token_id` to be serialized directly, matching the actual implementation pattern used in the modular version.
## Verification
- ✅ Tested with real Qwen3-Omni model: `audio_token_id` now preserves correctly (151675 → 151675)
- ✅ Basic configuration tests pass
- ✅ No breaking changes to existing functionality
Fixes #41191
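The `attribute_map` redirection at the heart of this bug can be sketched with a toy class (a minimal stand-in, not the actual `PretrainedConfig` implementation):

```python
class ToyConfig:
    # Minimal stand-in for PretrainedConfig's attribute_map behavior:
    # reading a mapped name is redirected to the target attribute.
    attribute_map = {"audio_token_id": "audio_token_index"}

    def __init__(self, audio_token_index=151646):
        self.audio_token_index = audio_token_index

    def __getattr__(self, name):
        # Invoked only when normal attribute lookup fails.
        mapped = type(self).attribute_map.get(name)
        if mapped is not None:
            return getattr(self, mapped)
        raise AttributeError(name)

cfg = ToyConfig(audio_token_index=151675)
assert cfg.audio_token_id == 151675  # alias resolves through the map

# If serialization writes only `audio_token_id` while the class still
# expects `audio_token_index`, the key is silently dropped on reload and
# the default comes back: the round-trip failure described above.
reloaded = ToyConfig()
assert reloaded.audio_token_id == 151646
```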
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [x] Was this discussed/approved via a Github issue? Yes: #41191
- [x] Did you make sure existing tests pass?
- [x] This is a bug fix that doesn't require documentation updates
## Who can review?
@ArthurZucker @SunMarc (text models and configuration)
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41192/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41191
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41191/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41191/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41191/events
|
https://github.com/huggingface/transformers/issues/41191
| 3,463,240,542
|
I_kwDOCUB6oc7ObNte
| 41,191
|
[BUG] Qwen3-Omni: save_pretrained causes abnormal audio_token_id, leading to incorrect inference (in training scenario)
|
{
"login": "Jintao-Huang",
"id": 45290347,
"node_id": "MDQ6VXNlcjQ1MjkwMzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/45290347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jintao-Huang",
"html_url": "https://github.com/Jintao-Huang",
"followers_url": "https://api.github.com/users/Jintao-Huang/followers",
"following_url": "https://api.github.com/users/Jintao-Huang/following{/other_user}",
"gists_url": "https://api.github.com/users/Jintao-Huang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jintao-Huang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jintao-Huang/subscriptions",
"organizations_url": "https://api.github.com/users/Jintao-Huang/orgs",
"repos_url": "https://api.github.com/users/Jintao-Huang/repos",
"events_url": "https://api.github.com/users/Jintao-Huang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jintao-Huang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-29T04:55:17
| 2025-09-30T09:15:57
| 2025-09-30T09:15:57
|
CONTRIBUTOR
| null | null | null | null |
```python
from transformers import AutoConfig

MODEL_PATH = "Qwen/Qwen3-Omni-30B-A3B-Instruct"
a = AutoConfig.from_pretrained(MODEL_PATH)
a.save_pretrained('abcd')
b = AutoConfig.from_pretrained('abcd')
print(f'a.thinker_config.audio_token_id: {a.thinker_config.audio_token_id}')
print(f'b.thinker_config.audio_token_id: {b.thinker_config.audio_token_id}')
assert a.thinker_config.audio_token_id == b.thinker_config.audio_token_id
```
<img width="590" height="134" alt="Image" src="https://github.com/user-attachments/assets/dcf4b5d1-9f75-480a-9221-dd380e096d2f" />
https://github.com/huggingface/transformers/blob/071eb5334f5a9ac2c7a13515219be8a272388ec6/src/transformers/models/qwen3_omni_moe/modular_qwen3_omni_moe.py#L272
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41191/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41190
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41190/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41190/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41190/events
|
https://github.com/huggingface/transformers/issues/41190
| 3,462,714,127
|
I_kwDOCUB6oc7OZNMP
| 41,190
|
Regression in SmolVLM results in different vision embeddings
|
{
"login": "yfw",
"id": 485138,
"node_id": "MDQ6VXNlcjQ4NTEzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/485138?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yfw",
"html_url": "https://github.com/yfw",
"followers_url": "https://api.github.com/users/yfw/followers",
"following_url": "https://api.github.com/users/yfw/following{/other_user}",
"gists_url": "https://api.github.com/users/yfw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yfw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yfw/subscriptions",
"organizations_url": "https://api.github.com/users/yfw/orgs",
"repos_url": "https://api.github.com/users/yfw/repos",
"events_url": "https://api.github.com/users/yfw/events{/privacy}",
"received_events_url": "https://api.github.com/users/yfw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-28T23:41:19
| 2025-10-07T11:43:25
| 2025-10-07T11:43:25
|
NONE
| null | null | null | null |
### System Info
This is an issue since v4.55.1
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Please run the following reproducer script: `uv run reproducer.py`
```python
#!/usr/bin/env -S uv run --script
# /// script
# dependencies = [
# "num2words",
# "pillow",
# "torch",
# "torchvision",
# "transformers==4.54.1",
# ]
# ///
import transformers
print(transformers.__version__)
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch
model_path = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"
processor = AutoProcessor.from_pretrained(model_path)
model = AutoModelForImageTextToText.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
).to("cuda")
messages = [
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
{"type": "text", "text": "Can you describe this image?"},
]
},
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device, dtype=torch.bfloat16)
patch_size = model.model.vision_model.patch_size
pixel_values = inputs['pixel_values']
batch_size, num_images, num_channels, height, width = pixel_values.shape
pixel_values = pixel_values.view(batch_size * num_images, *pixel_values.shape[2:])
patch_attention_mask = torch.ones(
(
batch_size,
pixel_values.size(2) // patch_size,
pixel_values.size(3) // patch_size,
)
)
patch_attention_mask = patch_attention_mask.to(dtype=torch.bool, device=pixel_values.device)
embeddings_model = model.model.vision_model.embeddings
embeddings = embeddings_model(pixel_values=pixel_values, patch_attention_mask=patch_attention_mask)
print(embeddings[0][-1][:10])
```
For transformers==4.54.1, this will produce:
```
tensor([-0.5938, -0.5117, -0.7305, -1.1797, -0.5977, -0.7305, -0.7070, -0.6484,
-0.5547, -0.6758], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
```
Changing the `dependencies` to `"transformers==4.55.1"` and rerunning the script will produce:
```
tensor([-0.5977, -0.5742, -0.6875, -1.0625, -0.5977, -0.8398, -0.7031, -0.6406,
-0.5078, -0.6445], device='cuda:0', dtype=torch.bfloat16,
grad_fn=<SliceBackward0>)
```
The issue is that the logic for calculating `position_ids` changed slightly between 4.54.1 (https://github.com/huggingface/transformers/blob/4.54.1/src/transformers/models/smolvlm/modeling_smolvlm.py#L131-L156) and 4.55.1 (https://github.com/huggingface/transformers/blob/v4.55.1/src/transformers/models/smolvlm/modeling_smolvlm.py#L131-L162). This results in different `position_embeddings`, which affect the final `embeddings`.
### Expected behavior
We expect the `embeddings` to be the same between versions. Furthermore, the current version differs from vllm's implementation (which is aligned with 4.54.1). This causes issues in RL when we expect the inference and training implementations to be aligned.
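For reference, the two printed slices can be compared directly. This hedged sketch (values copied from the outputs above) confirms the drift is far larger than bfloat16 rounding noise, i.e. a logic change rather than an accumulation-order artifact:

```python
import torch

# First 10 embedding values printed above for each version (copied verbatim
# from the outputs in this report).
v4_54_1 = torch.tensor([-0.5938, -0.5117, -0.7305, -1.1797, -0.5977,
                        -0.7305, -0.7070, -0.6484, -0.5547, -0.6758])
v4_55_1 = torch.tensor([-0.5977, -0.5742, -0.6875, -1.0625, -0.5977,
                        -0.8398, -0.7031, -0.6406, -0.5078, -0.6445])

# Maximum elementwise drift between the two versions.
print((v4_54_1 - v4_55_1).abs().max().item())
print(torch.allclose(v4_54_1, v4_55_1, atol=1e-2))  # False
```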
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41190/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41189
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41189/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41189/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41189/events
|
https://github.com/huggingface/transformers/issues/41189
| 3,462,619,657
|
I_kwDOCUB6oc7OY2IJ
| 41,189
|
Documentation says manual `tp_plan` can be set but it can't be set.
|
{
"login": "st81",
"id": 58893365,
"node_id": "MDQ6VXNlcjU4ODkzMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/58893365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/st81",
"html_url": "https://github.com/st81",
"followers_url": "https://api.github.com/users/st81/followers",
"following_url": "https://api.github.com/users/st81/following{/other_user}",
"gists_url": "https://api.github.com/users/st81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/st81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/st81/subscriptions",
"organizations_url": "https://api.github.com/users/st81/orgs",
"repos_url": "https://api.github.com/users/st81/repos",
"events_url": "https://api.github.com/users/st81/events{/privacy}",
"received_events_url": "https://api.github.com/users/st81/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-28T22:09:51
| 2025-09-30T16:29:55
| 2025-09-30T16:29:55
|
CONTRIBUTOR
| null | null | null | null |
### System Info
- `transformers` version: 4.57.0.dev0
- platform: linux-5.10.102.1-microsoft-standard-wsl2-x86_64-with-glibc2.31
- python version: 3.12.11
- huggingface_hub version: 1.0.0.rc1
- safetensors version: 0.5.3
- accelerate version: 1.9.0
- accelerate config: not found
- deepspeed version: 0.17.6
- pytorch version (accelerator?): 2.8.0+cu128 (cuda)
- using distributed or parallel set-up in script?: yes
- using gpu in script?: yes
- gpu type: nvidia geforce rtx 2070 super
### Who can help?
- distributed: @arthurzucker
- documentation: @stevhliu
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
The documentation says a manual `tp_plan` can be set, but setting one raises an error.
https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_multi?sharding=manual+plan#partitioning-a-model
```py
from transformers import AutoModelForCausalLM
model_id = "Qwen/Qwen2-1.5B-Instruct"
tp_plan = {
"model.layers.*.self_attn.q_proj": "colwise",
"model.layers.*.self_attn.k_proj": "colwise",
"model.layers.*.self_attn.v_proj": "colwise",
"model.layers.*.self_attn.o_proj": "rowwise",
}
model = AutoModelForCausalLM.from_pretrained(model_id, tp_plan=tp_plan)
```
```sh
torchrun --nproc-per-node=1 test.py
```
Error:
```sh
Traceback (most recent call last):
File "/home/shutotakahashi/projects/transformers-uv/transformers/test.py", line 11, in <module>
model = AutoModelForCausalLM.from_pretrained(model_id, tp_plan=tp_plan)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shutotakahashi/projects/transformers-uv/transformers/src/transformers/models/auto/auto_factory.py", line 389, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/shutotakahashi/projects/transformers-uv/transformers/src/transformers/modeling_utils.py", line 281, in _wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/shutotakahashi/projects/transformers-uv/transformers/src/transformers/modeling_utils.py", line 4615, in from_pretrained
raise ValueError(f"tp_plan supports 'auto' only for now but got {tp_plan}.")
ValueError: tp_plan supports 'auto' only for now but got {'model.layers.*.self_attn.q_proj': 'colwise', 'model.layers.*.self_attn.k_proj': 'colwise', 'model.layers.*.self_attn.v_proj': 'colwise', 'model.layers.*.self_attn.o_proj': 'rowwise'}.
```
### Expected behavior
[Looking at the source code](https://github.com/huggingface/transformers/blob/071eb5334f5a9ac2c7a13515219be8a272388ec6/src/transformers/modeling_utils.py#L4613), it appears that manual `tp_plan` is not supported yet, but the documentation says it is supported. I expect either:
- The documentation should indicate that manual `tp_plan` is not yet supported, or the manual `tp_plan` sections should be removed from the documentation
- If manual `tp_plan` is intended to be supported, it should work without error.
The error `ValueError(f"tp_plan supports 'auto' only for now but got {tp_plan}.")` was introduced in #34184, which was merged about 1 year ago.
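For illustration, here is a minimal sketch of the check behind that traceback (a hypothetical simplification, not the actual `from_pretrained` implementation): any `tp_plan` other than `"auto"` or `None` is rejected.

```python
def check_tp_plan(tp_plan):
    """Hypothetical stand-in for the validation in modeling_utils.from_pretrained."""
    if tp_plan is not None and tp_plan != "auto":
        # Matches the error message in the traceback above.
        raise ValueError(f"tp_plan supports 'auto' only for now but got {tp_plan}.")
    return tp_plan

check_tp_plan("auto")  # accepted
try:
    check_tp_plan({"model.layers.*.self_attn.q_proj": "colwise"})
except ValueError as e:
    print(e)  # any dict-valued plan is rejected
```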
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41189/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41189/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41188
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41188/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41188/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41188/events
|
https://github.com/huggingface/transformers/pull/41188
| 3,462,348,876
|
PR_kwDOCUB6oc6q9jlE
| 41,188
|
feat: support cwm modeling
|
{
"login": "xgal",
"id": 6907562,
"node_id": "MDQ6VXNlcjY5MDc1NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6907562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xgal",
"html_url": "https://github.com/xgal",
"followers_url": "https://api.github.com/users/xgal/followers",
"following_url": "https://api.github.com/users/xgal/following{/other_user}",
"gists_url": "https://api.github.com/users/xgal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xgal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xgal/subscriptions",
"organizations_url": "https://api.github.com/users/xgal/orgs",
"repos_url": "https://api.github.com/users/xgal/repos",
"events_url": "https://api.github.com/users/xgal/events{/privacy}",
"received_events_url": "https://api.github.com/users/xgal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-28T18:27:00
| 2025-09-30T10:37:17
| 2025-09-30T10:37:16
|
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41188",
"html_url": "https://github.com/huggingface/transformers/pull/41188",
"diff_url": "https://github.com/huggingface/transformers/pull/41188.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41188.patch",
"merged_at": null
}
|
# What does this PR do?
Support CWM modeling https://huggingface.co/facebook/cwm/tree/main
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"login": "xgal",
"id": 6907562,
"node_id": "MDQ6VXNlcjY5MDc1NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6907562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xgal",
"html_url": "https://github.com/xgal",
"followers_url": "https://api.github.com/users/xgal/followers",
"following_url": "https://api.github.com/users/xgal/following{/other_user}",
"gists_url": "https://api.github.com/users/xgal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xgal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xgal/subscriptions",
"organizations_url": "https://api.github.com/users/xgal/orgs",
"repos_url": "https://api.github.com/users/xgal/repos",
"events_url": "https://api.github.com/users/xgal/events{/privacy}",
"received_events_url": "https://api.github.com/users/xgal/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41188/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41187
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41187/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41187/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41187/events
|
https://github.com/huggingface/transformers/issues/41187
| 3,462,196,052
|
I_kwDOCUB6oc7OXOtU
| 41,187
|
There is no license in the readme
|
{
"login": "vinitjain2005",
"id": 161056675,
"node_id": "U_kgDOCZmHow",
"avatar_url": "https://avatars.githubusercontent.com/u/161056675?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vinitjain2005",
"html_url": "https://github.com/vinitjain2005",
"followers_url": "https://api.github.com/users/vinitjain2005/followers",
"following_url": "https://api.github.com/users/vinitjain2005/following{/other_user}",
"gists_url": "https://api.github.com/users/vinitjain2005/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vinitjain2005/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinitjain2005/subscriptions",
"organizations_url": "https://api.github.com/users/vinitjain2005/orgs",
"repos_url": "https://api.github.com/users/vinitjain2005/repos",
"events_url": "https://api.github.com/users/vinitjain2005/events{/privacy}",
"received_events_url": "https://api.github.com/users/vinitjain2005/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-28T16:35:01
| 2025-09-29T12:29:08
| 2025-09-29T12:29:08
|
NONE
| null | null | null | null |
I will add the license to the README file. I am a Hacktoberfest contributor; please assign this task to me.

|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41187/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41186
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41186/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41186/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41186/events
|
https://github.com/huggingface/transformers/issues/41186
| 3,461,018,646
|
I_kwDOCUB6oc7OSvQW
| 41,186
|
Qwen2.5-VL restore tensor multi-image form
|
{
"login": "NiFangBaAGe",
"id": 38719050,
"node_id": "MDQ6VXNlcjM4NzE5MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/38719050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NiFangBaAGe",
"html_url": "https://github.com/NiFangBaAGe",
"followers_url": "https://api.github.com/users/NiFangBaAGe/followers",
"following_url": "https://api.github.com/users/NiFangBaAGe/following{/other_user}",
"gists_url": "https://api.github.com/users/NiFangBaAGe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NiFangBaAGe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NiFangBaAGe/subscriptions",
"organizations_url": "https://api.github.com/users/NiFangBaAGe/orgs",
"repos_url": "https://api.github.com/users/NiFangBaAGe/repos",
"events_url": "https://api.github.com/users/NiFangBaAGe/events{/privacy}",
"received_events_url": "https://api.github.com/users/NiFangBaAGe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-28T03:36:24
| 2025-10-28T08:02:52
| null |
NONE
| null | null | null | null |
Hello, I have recently been experimenting with Qwen2.5-VL (https://github.com/huggingface/transformers/blob/v4.52-release/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py). I noticed that the features of multiple images are merged into a single tensor here,
```python
image_embeds = self.get_image_features(pixel_values, image_grid_thw)
```
but I want to process each image individually, such as performing pooling on each image. I found that when I attempt operations like
```python
image_embeds.view(n_img, image_embeds.shape[0]//n_img, -1)
```
I cannot correctly restore the multi-image format. Could you please advise on how to handle this?
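One possible workaround, sketched under the assumption that each image contributes `t * h * w / spatial_merge_size**2` tokens to the merged tensor (the grid values and hidden size below are hypothetical): compute per-image token counts from `image_grid_thw` and split `image_embeds` with `torch.split` before pooling, since images with different grids cannot be recovered by a uniform `view`.

```python
import torch

# Hypothetical grids for two images of different sizes, (t, h, w) per image.
image_grid_thw = torch.tensor([[1, 8, 8], [1, 4, 4]])
merge = 2  # Qwen2.5-VL merges 2x2 patches into one token
tokens_per_image = (image_grid_thw.prod(dim=-1) // (merge * merge)).tolist()  # [16, 4]

hidden = 16
image_embeds = torch.randn(sum(tokens_per_image), hidden)  # stand-in for the merged tensor

# Recover the per-image structure, then pool each image independently.
per_image = torch.split(image_embeds, tokens_per_image, dim=0)
pooled = [e.mean(dim=0) for e in per_image]
```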
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41186/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41185
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41185/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41185/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41185/events
|
https://github.com/huggingface/transformers/issues/41185
| 3,460,517,635
|
I_kwDOCUB6oc7OQ08D
| 41,185
|
llama3-8b
|
{
"login": "52566rz",
"id": 117735823,
"node_id": "U_kgDOBwSBjw",
"avatar_url": "https://avatars.githubusercontent.com/u/117735823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/52566rz",
"html_url": "https://github.com/52566rz",
"followers_url": "https://api.github.com/users/52566rz/followers",
"following_url": "https://api.github.com/users/52566rz/following{/other_user}",
"gists_url": "https://api.github.com/users/52566rz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/52566rz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/52566rz/subscriptions",
"organizations_url": "https://api.github.com/users/52566rz/orgs",
"repos_url": "https://api.github.com/users/52566rz/repos",
"events_url": "https://api.github.com/users/52566rz/events{/privacy}",
"received_events_url": "https://api.github.com/users/52566rz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-27T16:46:09
| 2025-09-29T12:27:25
| 2025-09-29T12:27:25
|
NONE
| null | null | null | null |
### System Info
When running
Setting `pad_token_id` to `eos_token_id`:None for open-end generation.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
When running
Setting `pad_token_id` to `eos_token_id`:None for open-end generation.
### Expected behavior
When running
Setting `pad_token_id` to `eos_token_id`:None for open-end generation.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41185/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41184
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41184/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41184/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41184/events
|
https://github.com/huggingface/transformers/issues/41184
| 3,460,166,303
|
I_kwDOCUB6oc7OPfKf
| 41,184
|
Continuous Batching sliding window attention mask is wrong
|
{
"login": "NixGD",
"id": 8730377,
"node_id": "MDQ6VXNlcjg3MzAzNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8730377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NixGD",
"html_url": "https://github.com/NixGD",
"followers_url": "https://api.github.com/users/NixGD/followers",
"following_url": "https://api.github.com/users/NixGD/following{/other_user}",
"gists_url": "https://api.github.com/users/NixGD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NixGD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NixGD/subscriptions",
"organizations_url": "https://api.github.com/users/NixGD/orgs",
"repos_url": "https://api.github.com/users/NixGD/repos",
"events_url": "https://api.github.com/users/NixGD/events{/privacy}",
"received_events_url": "https://api.github.com/users/NixGD/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-27T11:17:12
| 2025-10-13T15:12:45
| 2025-10-13T15:12:45
|
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.57.0.dev0
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 1.0.0.rc1
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: no
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@remi-or
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here's a minimal example showing that the sliding-window mask allows *more* query/key pairs to be attended to than the full causal mask, when it should allow fewer.
```python
import torch
from transformers.generation.continuous_batching.continuous_api import build_attention_mask
seq_len = 16
sliding_window = 4
cumulative_seqlens_q = torch.tensor([0, seq_len])
cumulative_seqlens_k = torch.tensor([0, seq_len])
# initialize masks to all ones -- attention is allowed everywhere
window_mask = torch.ones((1, 1, seq_len, seq_len), dtype=torch.float32)
full_mask = torch.ones((1, 1, seq_len, seq_len), dtype=torch.float32)
# build_attention_mask converts this to 0 & -inf
build_attention_mask(window_mask, cumulative_seqlens_q, cumulative_seqlens_k, sliding_window=sliding_window)
build_attention_mask(full_mask, cumulative_seqlens_q, cumulative_seqlens_k, sliding_window=1)
# entries that are still 0 allow the model to attend to that query-key pair
print("Key/Query pairs the model can attend to (full mask):", (full_mask == 0).sum().item())
print("Key/Query pairs the model can attend to (window mask):", (window_mask == 0).sum().item())
```
This outputs
```
Key/Query pairs the model can attend to (full mask): 136
Key/Query pairs the model can attend to (window mask): 202
```
### Expected behavior
Using Continuous Batching with `gpt-oss` gives meaningless results. I believe this is the reason.
My understanding is that the true cause is [this line](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/continuous_batching/continuous_api.py#L67C28-L67C32): it resets mask elements from `-inf` back to `0`, which makes those positions attendable again. Instead, the window attention should be _more_ restrictive than the full causal mask.
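For comparison, a quick sketch of what a correct causal sliding-window mask should allow for `seq_len=16, window=4`: token `i` may attend to at most `min(i + 1, window)` keys, so the windowed count must not exceed the full causal count, unlike the 202 reported above.

```python
# Expected attendable (query, key) pair counts for causal masks over seq_len=16.
seq_len, window = 16, 4
full = sum(i + 1 for i in range(seq_len))                   # full causal mask
windowed = sum(min(i + 1, window) for i in range(seq_len))  # sliding window of 4
print(full, windowed)  # 136 58
```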
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41184/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41183
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41183/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41183/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41183/events
|
https://github.com/huggingface/transformers/pull/41183
| 3,458,364,379
|
PR_kwDOCUB6oc6qwggh
| 41,183
|
Trainer: Pass `num_items_in_batch` to `compute_loss` in `prediction_step`
|
{
"login": "pramodith",
"id": 16939722,
"node_id": "MDQ6VXNlcjE2OTM5NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/16939722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pramodith",
"html_url": "https://github.com/pramodith",
"followers_url": "https://api.github.com/users/pramodith/followers",
"following_url": "https://api.github.com/users/pramodith/following{/other_user}",
"gists_url": "https://api.github.com/users/pramodith/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pramodith/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pramodith/subscriptions",
"organizations_url": "https://api.github.com/users/pramodith/orgs",
"repos_url": "https://api.github.com/users/pramodith/repos",
"events_url": "https://api.github.com/users/pramodith/events{/privacy}",
"received_events_url": "https://api.github.com/users/pramodith/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-26T16:52:02
| 2025-09-30T09:45:17
| 2025-09-30T09:45:17
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41183",
"html_url": "https://github.com/huggingface/transformers/pull/41183",
"diff_url": "https://github.com/huggingface/transformers/pull/41183.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41183.patch",
"merged_at": "2025-09-30T09:45:17"
}
|
# What does this PR do?
Passes `num_items_in_batch` to the `compute_loss` function in `prediction_step`, so that the loss is calculated the same way at train and eval time.
<!-- Remove if not applicable -->
Fixes #41108
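As a toy illustration of why the two paths diverge (the numbers below are made up; this is not the Trainer code): a plain mean over per-token losses inside a batch gives a different value than normalizing by a global `num_items_in_batch`.

```python
# Hypothetical numbers: two per-token losses in one micro-batch, with
# num_items_in_batch counted globally (e.g. across accumulation steps).
token_losses = [2.0, 4.0]
num_items_in_batch = 4

eval_style_loss = sum(token_losses) / len(token_losses)    # plain mean
train_style_loss = sum(token_losses) / num_items_in_batch  # global normalization

print(eval_style_loss, train_style_loss)  # 3.0 1.5 -- the mismatch this PR removes
```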
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@SunMarc
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41183/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41182
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41182/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41182/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41182/events
|
https://github.com/huggingface/transformers/pull/41182
| 3,457,937,056
|
PR_kwDOCUB6oc6qvDwp
| 41,182
|
Update GLM-4.1V MMRope implementation
|
{
"login": "zRzRzRzRzRzRzR",
"id": 93239683,
"node_id": "U_kgDOBY65gw",
"avatar_url": "https://avatars.githubusercontent.com/u/93239683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zRzRzRzRzRzRzR",
"html_url": "https://github.com/zRzRzRzRzRzRzR",
"followers_url": "https://api.github.com/users/zRzRzRzRzRzRzR/followers",
"following_url": "https://api.github.com/users/zRzRzRzRzRzRzR/following{/other_user}",
"gists_url": "https://api.github.com/users/zRzRzRzRzRzRzR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zRzRzRzRzRzRzR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zRzRzRzRzRzRzR/subscriptions",
"organizations_url": "https://api.github.com/users/zRzRzRzRzRzRzR/orgs",
"repos_url": "https://api.github.com/users/zRzRzRzRzRzRzR/repos",
"events_url": "https://api.github.com/users/zRzRzRzRzRzRzR/events{/privacy}",
"received_events_url": "https://api.github.com/users/zRzRzRzRzRzRzR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-26T14:36:55
| 2025-10-09T10:15:48
| 2025-10-09T10:15:47
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41182",
"html_url": "https://github.com/huggingface/transformers/pull/41182",
"diff_url": "https://github.com/huggingface/transformers/pull/41182.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41182.patch",
"merged_at": "2025-10-09T10:15:47"
}
|
# What does this PR do?
This PR enables the GLM-4.1V model to support 4D-input MRoPE processing. It supports the same input format as Qwen3 in verl.
verl PR: https://github.com/volcengine/verl/pull/3291
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41182/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41181
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41181/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41181/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41181/events
|
https://github.com/huggingface/transformers/pull/41181
| 3,457,874,197
|
PR_kwDOCUB6oc6qu2NC
| 41,181
|
download and use HF Hub Cache
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-26T14:17:56
| 2025-10-03T09:11:39
| 2025-10-03T09:11:37
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41181",
"html_url": "https://github.com/huggingface/transformers/pull/41181",
"diff_url": "https://github.com/huggingface/transformers/pull/41181.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41181.patch",
"merged_at": "2025-10-03T09:11:37"
}
|
# What does this PR do?
I did the following
- run the tests
- collect the calls to `from_pretrained`
- run those `from_pretrained` calls within our CircleCI docker image build
- upload the contents of `~/.cache/huggingface/hub` to a Hub repository, `hf-internal-testing/hf_hub_cache`
- at CircleCI runtime, download it and unzip it into the cache directory
I will work on putting all the pieces together so this is easier to do in the future, but this PR already reduces the number of calls to the Hub.
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41181/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41180
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41180/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41180/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41180/events
|
https://github.com/huggingface/transformers/issues/41180
| 3,457,723,742
|
I_kwDOCUB6oc7OGK1e
| 41,180
|
Qwen2.5-VL-7B-Instruct Accuracy Regression Still Persists in v4.56.2
|
{
"login": "rahul-tuli",
"id": 25380596,
"node_id": "MDQ6VXNlcjI1MzgwNTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/25380596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rahul-tuli",
"html_url": "https://github.com/rahul-tuli",
"followers_url": "https://api.github.com/users/rahul-tuli/followers",
"following_url": "https://api.github.com/users/rahul-tuli/following{/other_user}",
"gists_url": "https://api.github.com/users/rahul-tuli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rahul-tuli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahul-tuli/subscriptions",
"organizations_url": "https://api.github.com/users/rahul-tuli/orgs",
"repos_url": "https://api.github.com/users/rahul-tuli/repos",
"events_url": "https://api.github.com/users/rahul-tuli/events{/privacy}",
"received_events_url": "https://api.github.com/users/rahul-tuli/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-26T13:38:20
| 2025-10-27T09:21:07
| null |
CONTRIBUTOR
| null | null | null | null |
## Summary
Despite issue #40136 being marked as resolved, the significant accuracy regression in the `Qwen2.5-VL-7B-Instruct` model persists in the latest Transformers version, 4.56.2. Our testing shows a ~25% relative accuracy drop on the MMMU Literature benchmark, more significant than the drop reported in the original issue.
## Problem Description
The `Qwen2.5-VL-7B-Instruct` model shows inconsistent and degraded performance on multimodal evaluation benchmarks when using recent Transformers versions (4.54.0+), despite PR #40490 claiming to fix this issue.
### Observed Results
| Transformers Version | MMMU Literature Accuracy | Relative Change |
|---------------------|--------------------------|-----------------|
| 4.53.3 | 93.33% ± 4.63% | Baseline |
| 4.56.2 | 70.00% ± 8.51% | -25.0% relative |
## Reproduction Steps
### Environment Setup
```bash
uv pip install lm-eval torch torchvision accelerate Pillow transformers==4.56.2
```
### Evaluation Command
```bash
lm_eval \
--model hf-multimodal \
--model_args "pretrained=Qwen/Qwen2.5-VL-7B-Instruct,dtype=bfloat16,add_bos_token=True,convert_img_format=True" \
--tasks mmmu_val_literature \
--num_fewshot 0 \
--batch_size 8 \
--verbosity INFO
```
## Impact
This regression affects:
- Production systems using Qwen2.5-VL for visual question answering
- Research benchmarks and evaluations
- Any multimodal applications relying on Qwen2.5-VL models
The ~25% relative accuracy drop represents a significant degradation that makes the model substantially less reliable for downstream applications.
## Expected Behavior
The model should maintain consistent accuracy across Transformers versions, as observed with v4.53.3 (93.33% accuracy).
## Actual Behavior
The model shows degraded performance in v4.56.2 (70.00% accuracy), indicating the original issue was not fully resolved.
## Additional Context
- **Original Issue**: #40136 (marked as resolved)
- **Claimed Fix**: PR #40490
- **Hardware**: NVIDIA GPU with CUDA support H100
- **Framework**: lm-eval with hf-multimodal backend
- **Model**: Qwen/Qwen2.5-VL-7B-Instruct
- **Task**: MMMU Literature benchmark
- **Evaluation Consistency**: Fixed random seeds used across all tests
## Request
Please reopen investigation into this regression as the issue appears to persist despite the claimed resolution. The consistent ~25% accuracy drop indicates a systematic issue that needs addressing.
---
**Test Environment Details:**
- Python 3.12
- CUDA-enabled environment
- Multiple test runs with identical configurations
- Fixed random seeds for reproducibility
## Raw Test Logs
<details>
<summary>Transformers 4.56.2 Results</summary>
```
hf-multimodal (pretrained=Qwen/Qwen2.5-VL-7B-Instruct,dtype=bfloat16,add_bos_token=True,convert_img_format=True), gen_kwargs: (None), limit: None, num_fewshot: 0, batch_size: 8
| Tasks |Version|Filter|n-shot|Metric| |Value| |Stderr|
|----------|------:|------|-----:|------|---|----:|---|-----:|
|Literature| 0|none | 0|acc |↑ | 0.7|± |0.0851|
```
</details>
<details>
<summary>Transformers 4.53.3 Results</summary>
```
hf-multimodal (pretrained=Qwen/Qwen2.5-VL-7B-Instruct,dtype=bfloat16,add_bos_token=True,convert_img_format=True), gen_kwargs: (None), limit: None, num_fewshot: 0, batch_size: 8
| Tasks |Version|Filter|n-shot|Metric| |Value | |Stderr|
|----------|------:|------|-----:|------|---|-----:|---|-----:|
|Literature| 0|none | 0|acc |↑ |0.9333|± |0.0463|
```
</details>
### System Info
- `transformers` version: 4.56.2
- Platform: Linux-5.14.0-611.el9.x86_64-x86_64-with-glibc2.34
- Python version: 3.12.11
- Huggingface_hub version: 0.35.1
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA H100 80GB HBM3
### Expected behavior
The model should maintain consistent accuracy across Transformers versions, as observed with v4.53.3 (93.33% accuracy).
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41180/reactions",
"total_count": 5,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 4,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41180/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41179
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41179/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41179/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41179/events
|
https://github.com/huggingface/transformers/issues/41179
| 3,457,657,714
|
I_kwDOCUB6oc7OF6ty
| 41,179
|
ImportError `flash_attn_3` while setting `attn_implementation="flash_attention_3"` on hopper GPU
|
{
"login": "allenphilipj",
"id": 213606200,
"node_id": "U_kgDODLtfOA",
"avatar_url": "https://avatars.githubusercontent.com/u/213606200?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/allenphilipj",
"html_url": "https://github.com/allenphilipj",
"followers_url": "https://api.github.com/users/allenphilipj/followers",
"following_url": "https://api.github.com/users/allenphilipj/following{/other_user}",
"gists_url": "https://api.github.com/users/allenphilipj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/allenphilipj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/allenphilipj/subscriptions",
"organizations_url": "https://api.github.com/users/allenphilipj/orgs",
"repos_url": "https://api.github.com/users/allenphilipj/repos",
"events_url": "https://api.github.com/users/allenphilipj/events{/privacy}",
"received_events_url": "https://api.github.com/users/allenphilipj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-26T13:22:39
| 2025-10-27T08:02:57
| null |
NONE
| null | null | null | null |
### System Info
```bash
Package Version
---------------------------------- ----------------------------
accelerate 1.10.0
datasets 4.0.0
flash-attn-3 3.0.0b1+cu.12.8.torch.2.7
huggingface-hub 0.34.4
transformers 4.55.1
```
The check [here](https://github.com/huggingface/transformers/blob/50d2448a1a7b75354c3d0ca879afd124abd244ac/src/transformers/modeling_utils.py#L2468) causes a failure when setting `attn_implementation="flash_attention_3"` while loading the model.
### Who can help?
@Cyrilvallez
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm trying to use FA3 for my training task by loading the model as shown below:
```python
model = AutoModelForCausalLM.from_pretrained(
MODEL_ID,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_3",
)
```
I have FA3 installed:
```bash
$ uv pip list | grep flash
flash-attn-3 3.0.0b1+cu.12.8.torch.2.7
```
I noticed that the actual FA3 implementation is mapped [here](https://github.com/huggingface/transformers/blob/53838edde77cb10f3a360150aa85a457637e9ac3/src/transformers/modeling_flash_attention_utils.py#L92C46-L92C61), and I can confirm that `from flash_attn_interface import flash_attn_func, flash_attn_varlen_func` works in my venv, so I don't understand why the validation check looks for a `flash_attn_3` module.
Interestingly, if I bypass the validation check for the `flash_attn_3` module, the profiler shows the model correctly using the sm90 FA3 kernels.
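To show what the validation check actually sees, here is a small stdlib-only probe (the `probe_importable` helper is mine, not part of transformers) that reports which module names are resolvable in the active venv:

```python
import importlib.util

def probe_importable(names):
    """Map each top-level module name to whether it can be found on sys.path."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# On my setup, flash_attn_interface resolves but flash_attn_3 does not,
# which would explain why the check rejects the configuration.
print(probe_importable(["flash_attn_3", "flash_attn_interface"]))
```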
### Expected behavior
Setting `attn_implementation="flash_attention_3"` should not fail due to `flash_attn_3` import error.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41179/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41179/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41178
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41178/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41178/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41178/events
|
https://github.com/huggingface/transformers/issues/41178
| 3,457,279,563
|
I_kwDOCUB6oc7OEeZL
| 41,178
|
Memory leak when using openai/clip-vit-base-patch32
|
{
"login": "Adefey",
"id": 63973081,
"node_id": "MDQ6VXNlcjYzOTczMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/63973081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Adefey",
"html_url": "https://github.com/Adefey",
"followers_url": "https://api.github.com/users/Adefey/followers",
"following_url": "https://api.github.com/users/Adefey/following{/other_user}",
"gists_url": "https://api.github.com/users/Adefey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Adefey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Adefey/subscriptions",
"organizations_url": "https://api.github.com/users/Adefey/orgs",
"repos_url": "https://api.github.com/users/Adefey/repos",
"events_url": "https://api.github.com/users/Adefey/events{/privacy}",
"received_events_url": "https://api.github.com/users/Adefey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-26T12:01:55
| 2025-10-27T13:34:17
| null |
NONE
| null | null | null | null |
### System Info
The model runs on CPU+RAM: Ryzen 6800H, 14 GB DDR5; host system: Fedora 41; Docker base image: python:3.13.7
FastAPI microservice is deployed in docker with these requirements:
fastapi==0.117.1
huggingface-hub==0.35.1
numpy==2.3.3
pillow==11.3.0
pydantic==2.11.9
python-multipart==0.0.20
requests==2.32.4
sentencepiece==0.2.1
torch==2.8.0
torchvision==0.23.0
transformers==4.56.2
uvicorn==0.37.0
transformers env:
embedding-service | - `transformers` version: 4.56.2
embedding-service | - Platform: Linux-6.16.7-100.fc41.x86_64-x86_64-with-glibc2.41
embedding-service | - Python version: 3.13.7
embedding-service | - Huggingface_hub version: 0.35.1
embedding-service | - Safetensors version: 0.6.2
embedding-service | - Accelerate version: not installed
embedding-service | - Accelerate config: not found
embedding-service | - DeepSpeed version: not installed
embedding-service | - PyTorch version (accelerator?): 2.8.0+cu128 (NA)
embedding-service | - Tensorflow version (GPU?): not installed (NA)
embedding-service | - Flax version (CPU?/GPU?/TPU?): not installed (NA)
embedding-service | - Jax version: not installed
embedding-service | - JaxLib version: not installed
embedding-service | - Using distributed or parallel set-up in script?: (I'm not sure tbh...)
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Use CPU+RAM
2. UPD: Add at least 1000 images to `data` folder
3. Git clone this repository: https://github.com/Adefey/search_dir, checkout to this commit: `d8a40baa434c4b00ad04dada9d7221edb111f4aa`
4. Run: docker compose up --build
5. Call API: POST http://localhost:8003/api/v1/start_discovery
6. Monitor logs: depending on available system memory, embedding-service may get OOM-killed (embedding-service exited with code 137) after some batches (on my system with 14 GB RAM it's 10 batches)
Only three methods (`__init__`, `_encode`, `encode_images`) are likely relevant; repeated calls to `encode_images` result in OOM. The relevant methods:
```python
def __init__(self):
self.model_checkpoint = "openai/clip-vit-base-patch32"
os.system("transformers env")
self.device = "cuda" if torch.cuda.is_available() else "cpu"
logger.info(f"Start setting up model {self.model_checkpoint} on {self.device}")
self.model = AutoModel.from_pretrained(self.model_checkpoint).to(self.device)
self.processor = AutoProcessor.from_pretrained(self.model_checkpoint, use_fast=False)
logger.info(f"Finished setting up model {self.model_checkpoint} on {self.device}")
```
```python
def _encode(self, inputs: dict) -> list[float]:
inputs = {k: v.to(self.device) for k, v in inputs.items()}
if "pixel_values" in inputs:
features = self.model.get_image_features(**inputs)
else:
features = self.model.get_text_features(**inputs)
result = features.cpu().detach().numpy().tolist()
del inputs
del features
if self.device == "cuda":
torch.cuda.empty_cache()
# ??????????
# trim_memory()
return result
```
```python
def encode_images(self, images: list[bytes]) -> list[list[float]]:
"""
Process images into embeddings
"""
logger.info(f"Start encoding images")
image_list = [Image.open(io.BytesIO(image)) for image in images]
with torch.inference_mode():
inputs = self.processor(
images=image_list,
return_tensors="pt",
padding=True,
)
result = self._encode(inputs)
for image in image_list:
image.close()
logger.info(f"Finished encoding images")
return result
```
Also, there is a working fix: calling `trim_memory()` after each model call:
```python
def trim_memory():
libc = ctypes.CDLL("libc.so.6")
return libc.malloc_trim(0)
```
But I think this is a workaround and transformers library should manage resources correctly on its own.
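For completeness, a slightly more defensive variant of the workaround (my own sketch, guarded so it is a no-op off Linux/glibc) that can be called after each batch:

```python
import ctypes
import sys

def trim_memory() -> int:
    """Ask glibc to return freed heap pages to the OS; no-op on non-Linux."""
    if not sys.platform.startswith("linux"):
        return 0
    libc = ctypes.CDLL("libc.so.6")
    return libc.malloc_trim(0)  # per the man page: 1 if memory was released, else 0

# e.g. call trim_memory() at the end of encode_images() after each batch
```

As noted, this keeps memory usage flat in my runs, but it remains a workaround rather than a fix.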
### Expected behavior
Use case: a microservice with the model receives batches of 30 images to compute embeddings, then another batch, and so on. After 10 batches the service is killed because of OOM. Manually monitoring memory in htop shows usage increasing by 600-800 MB with every batch. Expected behavior: roughly constant memory usage throughout batch processing.
I suppose there is a memory leak or memory fragmentation issue where new memory keeps being allocated and not reused.
UPD: exactly the same OOM issue happens with `google/siglip2-base-patch16-256`, and again the `malloc_trim` workaround works
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41178/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41177
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41177/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41177/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41177/events
|
https://github.com/huggingface/transformers/pull/41177
| 3,457,194,782
|
PR_kwDOCUB6oc6qse_n
| 41,177
|
Fix Latex typesetting in documentation
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-26T11:39:40
| 2025-10-10T16:12:00
| 2025-10-10T15:54:28
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41177",
"html_url": "https://github.com/huggingface/transformers/pull/41177",
"diff_url": "https://github.com/huggingface/transformers/pull/41177.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41177.patch",
"merged_at": "2025-10-10T15:54:28"
}
|
# What does this PR do?
As the title says.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41177/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41176
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41176/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41176/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41176/events
|
https://github.com/huggingface/transformers/pull/41176
| 3,456,861,283
|
PR_kwDOCUB6oc6qrZWt
| 41,176
|
Optimize rope_deltas propagation logic in Qwen2.5-VL
|
{
"login": "Xqle",
"id": 87457840,
"node_id": "MDQ6VXNlcjg3NDU3ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/87457840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xqle",
"html_url": "https://github.com/Xqle",
"followers_url": "https://api.github.com/users/Xqle/followers",
"following_url": "https://api.github.com/users/Xqle/following{/other_user}",
"gists_url": "https://api.github.com/users/Xqle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xqle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xqle/subscriptions",
"organizations_url": "https://api.github.com/users/Xqle/orgs",
"repos_url": "https://api.github.com/users/Xqle/repos",
"events_url": "https://api.github.com/users/Xqle/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xqle/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-26T10:18:15
| 2025-10-23T11:56:30
| 2025-10-23T11:56:29
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41176",
"html_url": "https://github.com/huggingface/transformers/pull/41176",
"diff_url": "https://github.com/huggingface/transformers/pull/41176.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41176.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR fixes the propagation of `rope_deltas` in the Qwen2.5-VL model during generation.
Currently, the `forward()` method of `Qwen2_5_VLForConditionalGeneration` accepts `rope_deltas` as an argument, but the value is never passed to the underlying `Qwen2_5_VLModel`. As a result, users providing `rope_deltas` directly to `forward()` would see no effect.
**Modifications**
1. `Qwen2_5_VLForConditionalGeneration.forward()` now passes `rope_deltas` to `Qwen2_5_VLModel.forward()`.
2. `Qwen2_5_VLModel.forward()` now accepts `rope_deltas` and updates its internal state accordingly.
3. Updates `prepare_inputs_for_generation()` to store the calculated `rope_deltas` in `model_inputs`, aligning their handling with `position_ids`.
**Impact**
- Users can now explicitly provide `rope_deltas` during generation and have them correctly applied.
- Ensures consistency between prefill and decoding phases in multi-modal generation.
- Aligns the API behavior with documentation.
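Conceptually, `rope_deltas` acts as a per-sample offset added to the text position index during decoding; a minimal plain-Python sketch of that relationship (names and shapes are illustrative reconstructions, not the actual transformers code):

```python
def decode_position_ids(cache_position, rope_deltas):
    """Per-sample decode positions = cache position + prefill delta.

    cache_position: list of ints, positions of the tokens being decoded.
    rope_deltas: one int per batch sample, computed at prefill from the
    multimodal token layout (illustrative, not the real API).
    """
    return [[pos + delta for pos in cache_position] for delta in rope_deltas]

# Two samples with different multimodal prefills, both decoding token 10:
print(decode_position_ids([10], [3, -2]))  # [[13], [8]]
```

This is why dropping the value on the floor in `forward()` silently changed positions: without the delta, every sample would decode as if its prefill had contained no extra multimodal tokens.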
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"login": "Xqle",
"id": 87457840,
"node_id": "MDQ6VXNlcjg3NDU3ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/87457840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xqle",
"html_url": "https://github.com/Xqle",
"followers_url": "https://api.github.com/users/Xqle/followers",
"following_url": "https://api.github.com/users/Xqle/following{/other_user}",
"gists_url": "https://api.github.com/users/Xqle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xqle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xqle/subscriptions",
"organizations_url": "https://api.github.com/users/Xqle/orgs",
"repos_url": "https://api.github.com/users/Xqle/repos",
"events_url": "https://api.github.com/users/Xqle/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xqle/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41176/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41175
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41175/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41175/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41175/events
|
https://github.com/huggingface/transformers/pull/41175
| 3,456,596,957
|
PR_kwDOCUB6oc6qqi57
| 41,175
|
Bump hfh prerelease version
|
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-26T09:25:50
| 2025-09-29T14:30:11
| 2025-09-29T14:28:37
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41175",
"html_url": "https://github.com/huggingface/transformers/pull/41175",
"diff_url": "https://github.com/huggingface/transformers/pull/41175.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41175.patch",
"merged_at": "2025-09-29T14:28:37"
}
|
From `1.0.0.rc1` to `1.0.0.rc2` (a minor fix in the `@strict` dataclass, so it shouldn't change anything).
|
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41175/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41174
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41174/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41174/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41174/events
|
https://github.com/huggingface/transformers/pull/41174
| 3,456,512,691
|
PR_kwDOCUB6oc6qqRFK
| 41,174
|
🚨 [v5] Delete feature extractors used for vision
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-26T09:08:32
| 2025-10-01T11:20:58
| 2025-10-01T11:20:58
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41174",
"html_url": "https://github.com/huggingface/transformers/pull/41174",
"diff_url": "https://github.com/huggingface/transformers/pull/41174.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41174.patch",
"merged_at": "2025-10-01T11:20:58"
}
|
# What does this PR do?
As per the title, let's clean up for v5.
These were supposed to be deleted anyway, and the deprecation warning has been logged for a long time, even before I joined. Feature extractors are now reserved for audio models only.
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41174/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41174/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41173
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41173/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41173/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41173/events
|
https://github.com/huggingface/transformers/pull/41173
| 3,456,370,268
|
PR_kwDOCUB6oc6qpxun
| 41,173
|
Rope for Qwen2.5-VL
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-26T08:32:24
| 2025-10-06T08:56:30
| 2025-10-06T08:56:29
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41173",
"html_url": "https://github.com/huggingface/transformers/pull/41173",
"diff_url": "https://github.com/huggingface/transformers/pull/41173.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41173.patch",
"merged_at": "2025-10-06T08:56:29"
}
|
# What does this PR do?
Attempt to fix https://github.com/huggingface/transformers/issues/41093. I believe the `is_prefill()` logic had edge cases, which surfaced in the linked issue. Let's remove it, since the position ids are also prepared in `prepare_inputs_for_generation`.
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41173/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41172
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41172/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41172/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41172/events
|
https://github.com/huggingface/transformers/pull/41172
| 3,455,499,080
|
PR_kwDOCUB6oc6qm17K
| 41,172
|
Fix typesetting and content of llm_tutorial_optimization.md
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-26T01:43:48
| 2025-10-14T23:58:03
| 2025-10-14T15:40:26
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41172",
"html_url": "https://github.com/huggingface/transformers/pull/41172",
"diff_url": "https://github.com/huggingface/transformers/pull/41172.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41172.patch",
"merged_at": "2025-10-14T15:40:26"
}
|
# What does this PR do?
Fix LaTeX, markdown `*` escaping, and whitespace. Some errors in the content are also fixed.
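As an illustration of the `*` escaping involved, a literal asterisk in markdown must be backslash-escaped so it is not parsed as emphasis; a hypothetical helper (not part of this PR) would do:

```python
import re

def escape_markdown_asterisks(text: str) -> str:
    # Prefix each literal asterisk with a backslash so markdown renders
    # it as a character instead of toggling emphasis.
    return re.sub(r"\*", r"\\*", text)

print(escape_markdown_asterisks("d_model * n_heads"))  # d_model \* n_heads
```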
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41172/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41171
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41171/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41171/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41171/events
|
https://github.com/huggingface/transformers/issues/41171
| 3,455,296,499
|
I_kwDOCUB6oc7N86Pz
| 41,171
|
LFM2-VL PyTorch
|
{
"login": "yukiarimo",
"id": 67983369,
"node_id": "MDQ6VXNlcjY3OTgzMzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/67983369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yukiarimo",
"html_url": "https://github.com/yukiarimo",
"followers_url": "https://api.github.com/users/yukiarimo/followers",
"following_url": "https://api.github.com/users/yukiarimo/following{/other_user}",
"gists_url": "https://api.github.com/users/yukiarimo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yukiarimo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yukiarimo/subscriptions",
"organizations_url": "https://api.github.com/users/yukiarimo/orgs",
"repos_url": "https://api.github.com/users/yukiarimo/repos",
"events_url": "https://api.github.com/users/yukiarimo/events{/privacy}",
"received_events_url": "https://api.github.com/users/yukiarimo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T23:08:30
| 2025-09-27T18:00:16
| 2025-09-27T18:00:16
|
NONE
| null | null | null | null |
Hello!
Could you please share a full standalone PyTorch implementation of the LFM2-VL model, so it is possible to use it without relying on transformers? This would also be helpful for building custom models.
Thanks!
|
{
"login": "yukiarimo",
"id": 67983369,
"node_id": "MDQ6VXNlcjY3OTgzMzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/67983369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yukiarimo",
"html_url": "https://github.com/yukiarimo",
"followers_url": "https://api.github.com/users/yukiarimo/followers",
"following_url": "https://api.github.com/users/yukiarimo/following{/other_user}",
"gists_url": "https://api.github.com/users/yukiarimo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yukiarimo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yukiarimo/subscriptions",
"organizations_url": "https://api.github.com/users/yukiarimo/orgs",
"repos_url": "https://api.github.com/users/yukiarimo/repos",
"events_url": "https://api.github.com/users/yukiarimo/events{/privacy}",
"received_events_url": "https://api.github.com/users/yukiarimo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41171/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41170
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41170/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41170/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41170/events
|
https://github.com/huggingface/transformers/pull/41170
| 3,454,676,568
|
PR_kwDOCUB6oc6qkHT-
| 41,170
|
:rotating_light: [`v5`] Remove relative position embeddings (for bert like models)
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 9105758243,
"node_id": "LA_kwDOCUB6oc8AAAACHr7YIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for_v5?",
"name": "for_v5?",
"color": "35BC94",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T18:28:04
| 2025-10-06T12:21:48
| 2025-10-06T12:21:41
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41170",
"html_url": "https://github.com/huggingface/transformers/pull/41170",
"diff_url": "https://github.com/huggingface/transformers/pull/41170.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41170.patch",
"merged_at": "2025-10-06T12:21:41"
}
|
These embedding types are barely used and only make the modeling files more complex without justifying their existence. Position embedding types still exist in a few models; this PR only addresses the `relative_key(_query)` ones.
Some stats:
- None of the slow tests use them except bert
- The respective models in those tests together have less than 2k downloads in the last month
cc @hmellor this should remove any clashes with the kwargs you encountered in vLLM :D
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41170/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41169
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41169/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41169/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41169/events
|
https://github.com/huggingface/transformers/pull/41169
| 3,454,548,829
|
PR_kwDOCUB6oc6qjsrJ
| 41,169
|
Fix TorchDynamo crash in StaticCache by validating offloading and offload_only_non_sliding arguments
|
{
"login": "Flakes342",
"id": 60060568,
"node_id": "MDQ6VXNlcjYwMDYwNTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/60060568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Flakes342",
"html_url": "https://github.com/Flakes342",
"followers_url": "https://api.github.com/users/Flakes342/followers",
"following_url": "https://api.github.com/users/Flakes342/following{/other_user}",
"gists_url": "https://api.github.com/users/Flakes342/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Flakes342/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Flakes342/subscriptions",
"organizations_url": "https://api.github.com/users/Flakes342/orgs",
"repos_url": "https://api.github.com/users/Flakes342/repos",
"events_url": "https://api.github.com/users/Flakes342/events{/privacy}",
"received_events_url": "https://api.github.com/users/Flakes342/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-25T17:43:32
| 2025-10-01T12:48:23
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41169",
"html_url": "https://github.com/huggingface/transformers/pull/41169",
"diff_url": "https://github.com/huggingface/transformers/pull/41169.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41169.patch",
"merged_at": null
}
|
# What does this PR do?
Fixes #41164
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @Rocketknight1
# Description
This PR addresses a bug in StaticCache that can lead to torch.compile / TorchDynamo crashes when users pass:
- Unexpected keyword arguments (**kwargs)
- Misordered positional arguments
- Non-boolean values for offloading or offload_only_non_sliding
### Root causes / API issues:
1. Positional flexibility vs. keyword expectations
- StaticCache accepts config, max_cache_len as positional arguments, but offloading and offload_only_non_sliding were also sometimes passed positionally.
- This can silently assign non-boolean types (e.g., a device object) to offloading, leading to CUDA/TorchDynamo crashes in update().
2. Inconsistent argument handling across cache classes
- Base Cache is strict: offloading and offload_only_non_sliding must be booleans.
- StaticCache had a flexible signature with **kwargs, which could allow invalid arguments without immediate error.
- Deprecated children sometimes passed extra arguments through **kwargs, increasing risk of silent misassignment.
3. Potential user misuse
- Users passing device/dtype arguments positionally (common in examples or from auto-generated code) could crash torch.compile unexpectedly.
- Without clear error messages, this was non-obvious and difficult to debug.
# Reproduction
<img width="858" height="573" alt="image" src="https://github.com/user-attachments/assets/9462271d-d7fc-4803-9f77-204984ab1574" />
Using Qwen3-omni or similar models:
- Here, device and compute_dtype were passed as positional arguments.
- offloading is expected to be a bool, but now receives "cuda:0".
- This triggers the crash: InternalTorchDynamoError: AttributeError: 'int' object has no attribute 'device'
# Solution
1. Validate types in `StaticCache.__init__`:
Ensure `offloading` and `offload_only_non_sliding` are bool.
Raise a clear TypeError if invalid.
2. Catch unknown kwargs:
Raise TypeError if users pass unrecognized keyword arguments (`**kwargs`) to prevent silent misassignment.
3. Forward **kwargs safely in children:
Deprecated children like OffloadedHybridCache are untouched for now, but any kwargs they pass are safely handled in StaticCache.
# Additional Notes
- We could optionally enforce keyword-only arguments via `*` to fully prevent positional misassignment, but this may break backward compatibility.
- Deprecated cache classes (OffloadedHybridCache, etc.) are left untouched since they will be removed in v4.59.
- A more advanced fix could include runtime coercion of devices to offloading=False to fully prevent user mistakes, but raising errors is safer and explicit.
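The validation described in the solution steps can be sketched as follows — the class name and signature here are hypothetical stand-ins mirroring the PR description, not the actual transformers source:

```python
class StaticCacheSketch:
    """Hypothetical stand-in for StaticCache illustrating the added checks."""

    def __init__(self, config=None, max_cache_len=None,
                 offloading=False, offload_only_non_sliding=False, **kwargs):
        # 1. Validate types: a device passed positionally would land here.
        if not isinstance(offloading, bool):
            raise TypeError(
                f"`offloading` must be a bool, got {type(offloading).__name__} "
                f"({offloading!r}); did you pass a device positionally?"
            )
        if not isinstance(offload_only_non_sliding, bool):
            raise TypeError("`offload_only_non_sliding` must be a bool")
        # 2. Catch unknown kwargs instead of silently accepting them.
        if kwargs:
            raise TypeError(f"Unexpected keyword arguments: {sorted(kwargs)}")
        self.config = config
        self.max_cache_len = max_cache_len
        self.offloading = offloading
        self.offload_only_non_sliding = offload_only_non_sliding
```

With this in place, a call like `StaticCacheSketch(cfg, 1024, "cuda:0")` raises a clear `TypeError` at construction time instead of crashing later inside TorchDynamo.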
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41169/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41168
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41168/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41168/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41168/events
|
https://github.com/huggingface/transformers/pull/41168
| 3,454,289,486
|
PR_kwDOCUB6oc6qi07_
| 41,168
|
Remove data from examples
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T16:10:45
| 2025-09-26T13:23:17
| 2025-09-26T11:52:45
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41168",
"html_url": "https://github.com/huggingface/transformers/pull/41168",
"diff_url": "https://github.com/huggingface/transformers/pull/41168.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41168.patch",
"merged_at": "2025-09-26T11:52:45"
}
|
Removes the examples telemetry, as it is unused.
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41168/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41167
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41167/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41167/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41167/events
|
https://github.com/huggingface/transformers/pull/41167
| 3,454,286,849
|
PR_kwDOCUB6oc6qi0Wn
| 41,167
|
Improve `add_dates` script
|
{
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T16:10:00
| 2025-09-25T20:00:06
| 2025-09-25T20:00:06
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41167",
"html_url": "https://github.com/huggingface/transformers/pull/41167",
"diff_url": "https://github.com/huggingface/transformers/pull/41167.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41167.patch",
"merged_at": "2025-09-25T20:00:06"
}
|
# What does this PR do?
Improve the script to automatically add dates in the model doc files.
One big issue was that the script looked up the first commit merged into main to determine the transformers release date, but that cannot work for new models. So now, if the model is not on main (i.e. a new model), we use today's date.
Also, when check-all is not used, we now check which files have changed against upstream main instead of checking against the local branch.
Also fixes Qwen3OmniMoe doc.
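The two behaviors above might be sketched as follows — the function names and doc path are illustrative assumptions, not the actual script:

```python
import subprocess
from datetime import date


def pick_release_date(on_main, first_commit_date):
    """Use the first-commit date for models already on main;
    fall back to today's date for brand-new models."""
    if on_main and first_commit_date:
        return first_commit_date
    return date.today().isoformat()


def changed_model_docs(base="upstream/main"):
    """List model-doc files changed relative to upstream main
    (hypothetical path; the mode used when check-all is not passed)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "--", "docs/source/en/model_doc"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".md")]
```

Diffing against `upstream/main` rather than the local branch ensures files already committed locally but not yet merged upstream are still picked up.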
|
{
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41167/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41166
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41166/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41166/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41166/events
|
https://github.com/huggingface/transformers/pull/41166
| 3,454,109,116
|
PR_kwDOCUB6oc6qiNmP
| 41,166
|
[v5] Remove `model_parallel` deprecated feature
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T15:20:27
| 2025-09-29T14:14:05
| 2025-09-29T14:14:03
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41166",
"html_url": "https://github.com/huggingface/transformers/pull/41166",
"diff_url": "https://github.com/huggingface/transformers/pull/41166.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41166.patch",
"merged_at": "2025-09-29T14:14:03"
}
|
# What does this PR do?
This PR removes deprecated code related to the `model_parallel` feature, both in the modeling code and in Trainer. This feature was added only for a few models like t5 and gpt2, but we eventually switched to using device_map and now tp.
For the models, we remove the `is_parallelizable` and `model_parallel` attributes and some deprecated methods.
For Trainer, we remove a small section related to `model_parallel`.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41166/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41165
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41165/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41165/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41165/events
|
https://github.com/huggingface/transformers/pull/41165
| 3,453,973,012
|
PR_kwDOCUB6oc6qhvpY
| 41,165
|
[Trainer] deprecate `num_train_tokens`
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T14:44:57
| 2025-09-30T15:53:17
| 2025-09-30T15:53:16
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41165",
"html_url": "https://github.com/huggingface/transformers/pull/41165",
"diff_url": "https://github.com/huggingface/transformers/pull/41165.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41165.patch",
"merged_at": "2025-09-30T15:53:16"
}
|
# What does this PR do?
This PR deprecates `num_train_tokens`, as we already have `include_num_input_tokens_seen`, which is better supported.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41165/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41164
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41164/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41164/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41164/events
|
https://github.com/huggingface/transformers/issues/41164
| 3,453,910,717
|
I_kwDOCUB6oc7N3n69
| 41,164
|
cache offloading check is incorrect
|
{
"login": "mobicham",
"id": 37179323,
"node_id": "MDQ6VXNlcjM3MTc5MzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/37179323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mobicham",
"html_url": "https://github.com/mobicham",
"followers_url": "https://api.github.com/users/mobicham/followers",
"following_url": "https://api.github.com/users/mobicham/following{/other_user}",
"gists_url": "https://api.github.com/users/mobicham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mobicham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mobicham/subscriptions",
"organizations_url": "https://api.github.com/users/mobicham/orgs",
"repos_url": "https://api.github.com/users/mobicham/repos",
"events_url": "https://api.github.com/users/mobicham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mobicham/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T14:30:25
| 2025-09-29T14:21:27
| 2025-09-29T14:21:27
|
CONTRIBUTOR
| null | null | null | null |
<a href="https://github.com/huggingface/transformers/blob/a579de7f5e00a9fdb1e9828aa3ab78385959f231/src/transformers/cache_utils.py#L766">This check</a> creates some issues with torch.compile. The type hint is <a href="https://github.com/huggingface/transformers/blob/a579de7f5e00a9fdb1e9828aa3ab78385959f231/src/transformers/cache_utils.py#L685">bool</a>, but in some cases, that offloading value is actually a cuda device.
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
For example, with Qwen3-omni, if you do this, offloading is a cuda device, which triggers that offloading check, which crashes torch.compile:
```Python
from transformers import StaticCache
past_key_values = StaticCache(model.thinker.config, max_model_len, device, compute_dtype)
print(past_key_values.offloading)
# `cuda:0`- should be False!
```
### Expected behavior
Static cache should not crash with torch.compile.
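A toy illustration of why this happens — the signature below is a hypothetical mirror of the positional-argument hazard, not the real `StaticCache.__init__`. Python does not enforce type annotations at runtime, so a positionally passed device string silently becomes the `offloading` flag:

```python
def make_cache(config, max_cache_len, offloading: bool = False):
    """Hypothetical signature mirroring the positional-argument hazard."""
    # The annotation says bool, but nothing stops a str from binding here.
    return offloading


# Mirrors StaticCache(model.thinker.config, max_model_len, device, ...):
flag = make_cache({}, 1024, "cuda:0")
print(flag)  # prints "cuda:0" — truthy, so any `if offloading:` branch misfires
```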
|
{
"login": "mobicham",
"id": 37179323,
"node_id": "MDQ6VXNlcjM3MTc5MzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/37179323?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mobicham",
"html_url": "https://github.com/mobicham",
"followers_url": "https://api.github.com/users/mobicham/followers",
"following_url": "https://api.github.com/users/mobicham/following{/other_user}",
"gists_url": "https://api.github.com/users/mobicham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mobicham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mobicham/subscriptions",
"organizations_url": "https://api.github.com/users/mobicham/orgs",
"repos_url": "https://api.github.com/users/mobicham/repos",
"events_url": "https://api.github.com/users/mobicham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mobicham/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41164/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41163
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41163/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41163/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41163/events
|
https://github.com/huggingface/transformers/pull/41163
| 3,453,599,470
|
PR_kwDOCUB6oc6qgdpI
| 41,163
|
:rotating_light: [`DistilBert`] Refactor Attention
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T13:17:29
| 2025-10-02T15:50:53
| 2025-10-02T15:50:48
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41163",
"html_url": "https://github.com/huggingface/transformers/pull/41163",
"diff_url": "https://github.com/huggingface/transformers/pull/41163.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41163.patch",
"merged_at": "2025-10-02T15:50:48"
}
|
Same as in the bert models: refactor to the attn interface plus several other utilities (output xxx, kwargs, ...). It didn't have any dependency on bert, which is why this was missed. Should enable vLLM support on our side.
cc @hmellor @ArthurZucker
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41163/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41162
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41162/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41162/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41162/events
|
https://github.com/huggingface/transformers/pull/41162
| 3,453,424,864
|
PR_kwDOCUB6oc6qf5_F
| 41,162
|
Add Sinhala (සිංහල) translation of README
|
{
"login": "Ranjuna120",
"id": 177449086,
"node_id": "U_kgDOCpOofg",
"avatar_url": "https://avatars.githubusercontent.com/u/177449086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ranjuna120",
"html_url": "https://github.com/Ranjuna120",
"followers_url": "https://api.github.com/users/Ranjuna120/followers",
"following_url": "https://api.github.com/users/Ranjuna120/following{/other_user}",
"gists_url": "https://api.github.com/users/Ranjuna120/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ranjuna120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ranjuna120/subscriptions",
"organizations_url": "https://api.github.com/users/Ranjuna120/orgs",
"repos_url": "https://api.github.com/users/Ranjuna120/repos",
"events_url": "https://api.github.com/users/Ranjuna120/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ranjuna120/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-25T12:37:55
| 2025-09-25T15:44:34
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41162",
"html_url": "https://github.com/huggingface/transformers/pull/41162",
"diff_url": "https://github.com/huggingface/transformers/pull/41162.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41162.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41162/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41161
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41161/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41161/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41161/events
|
https://github.com/huggingface/transformers/pull/41161
| 3,453,393,190
|
PR_kwDOCUB6oc6qfzHg
| 41,161
|
[v5] Remove old sagemaker api support
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T12:28:22
| 2025-10-09T16:01:04
| 2025-09-30T15:41:52
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41161",
"html_url": "https://github.com/huggingface/transformers/pull/41161",
"diff_url": "https://github.com/huggingface/transformers/pull/41161.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41161.patch",
"merged_at": "2025-09-30T15:41:52"
}
|
# What does this PR do?
This PR cleans up the SageMaker code a bit. We remove code related to the old SageMaker API (<1.10), as it is 5 years old. Our setup file also recommends version >2.0.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41161/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41160
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41160/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41160/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41160/events
|
https://github.com/huggingface/transformers/pull/41160
| 3,453,308,299
|
PR_kwDOCUB6oc6qfg5T
| 41,160
|
[docs] update tips syntax
|
{
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-25T12:02:43
| 2025-10-01T09:25:11
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41160",
"html_url": "https://github.com/huggingface/transformers/pull/41160",
"diff_url": "https://github.com/huggingface/transformers/pull/41160.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41160.patch",
"merged_at": null
}
|
see https://github.com/huggingface/doc-builder?tab=readme-ov-file#tip
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41160/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41159
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41159/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41159/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41159/events
|
https://github.com/huggingface/transformers/pull/41159
| 3,453,090,399
|
PR_kwDOCUB6oc6qexqh
| 41,159
|
Support setting total_train_batch_size.
|
{
"login": "zhengchenyu",
"id": 10381583,
"node_id": "MDQ6VXNlcjEwMzgxNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/10381583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhengchenyu",
"html_url": "https://github.com/zhengchenyu",
"followers_url": "https://api.github.com/users/zhengchenyu/followers",
"following_url": "https://api.github.com/users/zhengchenyu/following{/other_user}",
"gists_url": "https://api.github.com/users/zhengchenyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhengchenyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhengchenyu/subscriptions",
"organizations_url": "https://api.github.com/users/zhengchenyu/orgs",
"repos_url": "https://api.github.com/users/zhengchenyu/repos",
"events_url": "https://api.github.com/users/zhengchenyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhengchenyu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-25T10:56:27
| 2025-09-25T13:33:35
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41159",
"html_url": "https://github.com/huggingface/transformers/pull/41159",
"diff_url": "https://github.com/huggingface/transformers/pull/41159.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41159.patch",
"merged_at": null
}
|
# What does this PR do?
Set a fixed total_train_batch_size by automatically adjusting gradient_accumulation_steps. This is extremely useful in elastic training.
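The arithmetic behind this can be sketched as follows (a minimal sketch; the function name and arguments are illustrative, not the actual `Trainer` API):

```python
def accumulation_steps_for(total_train_batch_size, per_device_batch_size, world_size):
    """Pick gradient_accumulation_steps so that
    per_device_batch_size * world_size * steps == total_train_batch_size."""
    per_step = per_device_batch_size * world_size
    if total_train_batch_size % per_step != 0:
        raise ValueError(
            f"total_train_batch_size={total_train_batch_size} is not divisible "
            f"by per_device_batch_size * world_size = {per_step}"
        )
    return total_train_batch_size // per_step

# In elastic training, when the worker count changes, the accumulation steps
# are recomputed so the effective global batch stays constant:
print(accumulation_steps_for(256, per_device_batch_size=8, world_size=4))  # 8
print(accumulation_steps_for(256, per_device_batch_size=8, world_size=8))  # 4
```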
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41159/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41158
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41158/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41158/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41158/events
|
https://github.com/huggingface/transformers/pull/41158
| 3,452,982,095
|
PR_kwDOCUB6oc6qeaLf
| 41,158
|
fix qwen text config
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T10:23:19
| 2025-09-30T17:23:44
| 2025-09-30T17:23:44
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41158",
"html_url": "https://github.com/huggingface/transformers/pull/41158",
"diff_url": "https://github.com/huggingface/transformers/pull/41158.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41158.patch",
"merged_at": "2025-09-30T17:23:44"
}
|
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/41020 and ensures constency in Qwen-VL text config. Side note: prev we had flat dict structure in Qwen and for BC passed kwargs to `super()` and to `text_config`. This caused confusion in TRL which apparently resets some text attributes manually when training
In this PR, Qwen will set/get text related attributes only through text config. The attributes are obtainable from nested config as `config.text_config.vocab_size` and from root as `config.vocab_size` (BC)
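The get/set delegation described above can be sketched like this (class names, attribute set, and default values are illustrative, not the actual `transformers` implementation):

```python
class TextConfig:
    def __init__(self, vocab_size=151936, hidden_size=3584):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size


class VLConfig:
    # Text attributes live only on text_config; root access is delegated for BC.
    _TEXT_ATTRS = {"vocab_size", "hidden_size"}

    def __init__(self, text_config=None):
        self.text_config = text_config or TextConfig()

    def __getattr__(self, name):
        # Only called when normal lookup fails, i.e. for delegated attributes.
        if name in self._TEXT_ATTRS:
            return getattr(self.text_config, name)
        raise AttributeError(name)

    def __setattr__(self, name, value):
        # Writes to text attributes are redirected to the nested config,
        # so there is a single source of truth.
        if name in self._TEXT_ATTRS:
            setattr(self.text_config, name, value)
        else:
            super().__setattr__(name, value)


config = VLConfig()
assert config.vocab_size == config.text_config.vocab_size  # BC read path
config.vocab_size = 32000                                  # writes through
assert config.text_config.vocab_size == 32000
```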
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41158/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41158/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41157
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41157/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41157/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41157/events
|
https://github.com/huggingface/transformers/pull/41157
| 3,452,816,339
|
PR_kwDOCUB6oc6qd2Bh
| 41,157
|
Fix white space in documentation
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T09:36:14
| 2025-09-30T23:58:10
| 2025-09-30T16:41:03
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41157",
"html_url": "https://github.com/huggingface/transformers/pull/41157",
"diff_url": "https://github.com/huggingface/transformers/pull/41157.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41157.patch",
"merged_at": "2025-09-30T16:41:03"
}
|
# What does this PR do?
As the title says.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41157/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41156
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41156/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41156/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41156/events
|
https://github.com/huggingface/transformers/pull/41156
| 3,452,796,544
|
PR_kwDOCUB6oc6qdxp3
| 41,156
|
Fix inaccurate train_tokens_per_second when resuming from checkpoint
|
{
"login": "lilin-1",
"id": 177207022,
"node_id": "U_kgDOCo_27g",
"avatar_url": "https://avatars.githubusercontent.com/u/177207022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lilin-1",
"html_url": "https://github.com/lilin-1",
"followers_url": "https://api.github.com/users/lilin-1/followers",
"following_url": "https://api.github.com/users/lilin-1/following{/other_user}",
"gists_url": "https://api.github.com/users/lilin-1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lilin-1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lilin-1/subscriptions",
"organizations_url": "https://api.github.com/users/lilin-1/orgs",
"repos_url": "https://api.github.com/users/lilin-1/repos",
"events_url": "https://api.github.com/users/lilin-1/events{/privacy}",
"received_events_url": "https://api.github.com/users/lilin-1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T09:31:31
| 2025-09-29T14:22:35
| 2025-09-29T14:22:35
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41156",
"html_url": "https://github.com/huggingface/transformers/pull/41156",
"diff_url": "https://github.com/huggingface/transformers/pull/41156.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41156.patch",
"merged_at": "2025-09-29T14:22:35"
}
|
# Fix inaccurate `train_tokens_per_second` when resuming from checkpoint
## What does this PR do?
This PR addresses the issue where `train_tokens_per_second` is calculated incorrectly when resuming training from a checkpoint.
### Problem Background
Previously, `speed_metrics` used the **cumulative total token count** (`self.state.num_input_tokens_seen`) to compute training speed. However, when resuming from a checkpoint:
1. The cumulative token count includes tokens processed in prior sessions.
2. `start_time` is reset to the current session’s start time.
This mismatch led to abnormally high initial `train_tokens_per_second` values (as the cumulative tokens were divided by a short new time window), which gradually decreased but never accurately reflected the current session’s real throughput (reported in #40560).
### Solution
To ensure speed metrics reflect only the current training session:
1. **Capture baseline token count**: In the `_inner_training_loop` method (right after `start_time = time.time()`), save the initial token count of the current session as `initial_num_input_tokens_seen_for_session`.
2. **Calculate session-specific token increment**: When updating logs with `speed_metrics`, compute the number of tokens processed *in the current session* (via `self.state.num_input_tokens_seen - initial_num_input_tokens_seen_for_session`), and pass this increment to `speed_metrics` instead of the cumulative total.
This change ensures `train_tokens_per_second` consistently represents the real-time throughput of the active training session—whether starting fresh or resuming from a checkpoint.
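The fix can be illustrated with a small sketch (variable and function names are illustrative, not the actual `Trainer` internals):

```python
import time


def session_tokens_per_second(num_input_tokens_seen, initial_tokens, start_time, now=None):
    """Throughput over the current session only: divide the tokens processed
    since resume by the wall time since resume, not the lifetime total."""
    now = time.time() if now is None else now
    return (num_input_tokens_seen - initial_tokens) / (now - start_time)


# Resuming from a checkpoint that had already seen 1,000,000 tokens:
initial = 1_000_000                   # captured right after start_time
start = 100.0                         # session start (seconds)
total_seen = 1_000_000 + 50_000       # cumulative counter after 10 s of training

# Buggy: cumulative tokens divided by the short new window -> inflated
buggy = total_seen / (110.0 - start)                                       # 105000.0
# Fixed: only this session's increment
fixed = session_tokens_per_second(total_seen, initial, start, now=110.0)   # 5000.0
print(buggy, fixed)
```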
Fixes #40560
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
Links: #40560
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and here are tips on formatting docstrings.
- [ ] Did you write any new necessary tests?
## Who can review?
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41156/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41155
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41155/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41155/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41155/events
|
https://github.com/huggingface/transformers/pull/41155
| 3,452,749,248
|
PR_kwDOCUB6oc6qdnmE
| 41,155
|
Fix format of compressed_tensors.md
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T09:20:13
| 2025-09-25T14:57:55
| 2025-09-25T14:50:15
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41155",
"html_url": "https://github.com/huggingface/transformers/pull/41155",
"diff_url": "https://github.com/huggingface/transformers/pull/41155.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41155.patch",
"merged_at": "2025-09-25T14:50:15"
}
|
# What does this PR do?
As the title says.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41155/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41154
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41154/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41154/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41154/events
|
https://github.com/huggingface/transformers/pull/41154
| 3,452,670,305
|
PR_kwDOCUB6oc6qdW2E
| 41,154
|
Fix single quotes in markdown
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T08:58:53
| 2025-09-25T13:25:10
| 2025-09-25T13:03:26
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41154",
"html_url": "https://github.com/huggingface/transformers/pull/41154",
"diff_url": "https://github.com/huggingface/transformers/pull/41154.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41154.patch",
"merged_at": "2025-09-25T13:03:26"
}
|
# What does this PR do?
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41154/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41153
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41153/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41153/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41153/events
|
https://github.com/huggingface/transformers/pull/41153
| 3,452,639,849
|
PR_kwDOCUB6oc6qdQGY
| 41,153
|
Fix: align Qwen2.5-VL inference rope index with training by passing second_per_grid_ts
|
{
"login": "Xqle",
"id": 87457840,
"node_id": "MDQ6VXNlcjg3NDU3ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/87457840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xqle",
"html_url": "https://github.com/Xqle",
"followers_url": "https://api.github.com/users/Xqle/followers",
"following_url": "https://api.github.com/users/Xqle/following{/other_user}",
"gists_url": "https://api.github.com/users/Xqle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xqle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xqle/subscriptions",
"organizations_url": "https://api.github.com/users/Xqle/orgs",
"repos_url": "https://api.github.com/users/Xqle/repos",
"events_url": "https://api.github.com/users/Xqle/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xqle/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T08:52:20
| 2025-09-26T08:09:18
| 2025-09-25T10:33:47
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41153",
"html_url": "https://github.com/huggingface/transformers/pull/41153",
"diff_url": "https://github.com/huggingface/transformers/pull/41153.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41153.patch",
"merged_at": "2025-09-25T10:33:47"
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Bug
In Qwen2.5-VL, during inference the method `prepare_inputs_for_generation` calls `get_rope_index` without passing `second_per_grid_ts`.
However, during training, `second_per_grid_ts` is explicitly passed.
This leads to an inconsistency between training and inference:
- Training: temporal positions are scaled according to `second_per_grid_ts`.
- Inference: temporal positions fall back to the default (1.0), which may cause misalignment in temporal position encoding.
## Fix
Update `prepare_inputs_for_generation` to also pass `second_per_grid_ts` to `get_rope_index`.
## Impact
Ensures temporal position encoding consistency between training and inference.
Prevents potential misalignment in multi-frame or video-based inputs.
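The impact can be illustrated with a toy stand-in (a hypothetical simplification for illustration only, not the real `get_rope_index`): omitting the argument silently changes the temporal positions.

```python
def get_rope_index(input_ids, video_grid_thw=None, second_per_grid_ts=None):
    """Toy stand-in for the real method: temporal positions are scaled by
    second_per_grid_ts when given, and silently fall back to 1.0 when the
    caller forgets to pass it."""
    if second_per_grid_ts is None:
        second_per_grid_ts = [1.0] * len(video_grid_thw)
    # one scaled temporal position per grid step, per video
    return [
        [t * s for t in range(thw[0])]
        for thw, s in zip(video_grid_thw, second_per_grid_ts)
    ]

# Training path forwards the argument; the pre-fix inference path drops it.
train_pos = get_rope_index(None, video_grid_thw=[(3, 2, 2)], second_per_grid_ts=[0.5])
infer_pos = get_rope_index(None, video_grid_thw=[(3, 2, 2)])
print(train_pos)  # [[0.0, 0.5, 1.0]]
print(infer_pos)  # [[0.0, 1.0, 2.0]] -- disagrees with training
```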
## Related Issue
Fixes #41152
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41153/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41153/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41152
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41152/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41152/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41152/events
|
https://github.com/huggingface/transformers/issues/41152
| 3,452,620,935
|
I_kwDOCUB6oc7NytCH
| 41,152
|
[Bug] Qwen2.5-VL prepare_inputs_for_generation does not pass second_per_grid_ts to get_rope_index
|
{
"login": "Xqle",
"id": 87457840,
"node_id": "MDQ6VXNlcjg3NDU3ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/87457840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xqle",
"html_url": "https://github.com/Xqle",
"followers_url": "https://api.github.com/users/Xqle/followers",
"following_url": "https://api.github.com/users/Xqle/following{/other_user}",
"gists_url": "https://api.github.com/users/Xqle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xqle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xqle/subscriptions",
"organizations_url": "https://api.github.com/users/Xqle/orgs",
"repos_url": "https://api.github.com/users/Xqle/repos",
"events_url": "https://api.github.com/users/Xqle/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xqle/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T08:47:27
| 2025-09-25T10:33:48
| 2025-09-25T10:33:48
|
CONTRIBUTOR
| null | null | null | null |
### System Info
When using Qwen2.5-VL for generation, the method `prepare_inputs_for_generation` calls `get_rope_index` without passing `second_per_grid_ts`.
```python
def prepare_inputs_for_generation(
    self,
    input_ids,
    past_key_values=None,
    attention_mask=None,
    inputs_embeds=None,
    cache_position=None,
    position_ids=None,
    use_cache=True,
    pixel_values=None,
    pixel_values_videos=None,
    image_grid_thw=None,
    video_grid_thw=None,
    second_per_grid_ts=None,
    **kwargs,
):
    # Overwritten -- in specific circumstances we don't want to forward image inputs to the model
    model_inputs = super().prepare_inputs_for_generation(
        input_ids,
        past_key_values=past_key_values,
        attention_mask=attention_mask,
        inputs_embeds=inputs_embeds,
        cache_position=cache_position,
        position_ids=position_ids,
        pixel_values=pixel_values,
        pixel_values_videos=pixel_values_videos,
        image_grid_thw=image_grid_thw,
        video_grid_thw=video_grid_thw,
        second_per_grid_ts=second_per_grid_ts,
        use_cache=use_cache,
        **kwargs,
    )

    # Qwen2-5-VL position_ids are prepared with rope_deltas
    if position_ids is None:
        # Calculate RoPE index once per generation in the pre-fill stage only.
        # When compiling, we can't check tensor values thus we check only input length
        # It is safe to assume that `length!=1` means we're in pre-fill because compiled
        # models currently cannot do assisted decoding
        if cache_position[0] == 0 or self.model.rope_deltas is None:
            vision_positions, rope_deltas = self.model.get_rope_index(
                model_inputs.get("input_ids", None),
                image_grid_thw=image_grid_thw,
                video_grid_thw=video_grid_thw,
                attention_mask=attention_mask,
                # ⚠ second_per_grid_ts is missing here
            )
            self.model.rope_deltas = rope_deltas
        # then use the prev pre-calculated rope-deltas to get the correct position ids
        elif "position_ids" in model_inputs:
            ...
    return model_inputs
```
This is inconsistent with the training setup, where `second_per_grid_ts` is provided in `Qwen2_5_VLModel.forward`:
```python
@auto_docstring
def forward(
    self,
    input_ids: Optional[torch.LongTensor] = None,
    attention_mask: Optional[torch.Tensor] = None,
    position_ids: Optional[torch.LongTensor] = None,
    past_key_values: Optional[Cache] = None,
    inputs_embeds: Optional[torch.FloatTensor] = None,
    use_cache: Optional[bool] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    pixel_values: Optional[torch.Tensor] = None,
    pixel_values_videos: Optional[torch.FloatTensor] = None,
    image_grid_thw: Optional[torch.LongTensor] = None,
    video_grid_thw: Optional[torch.LongTensor] = None,
    rope_deltas: Optional[torch.LongTensor] = None,
    cache_position: Optional[torch.LongTensor] = None,
    second_per_grid_ts: Optional[torch.Tensor] = None,
    **kwargs: Unpack[TransformersKwargs],
) -> Union[tuple, Qwen2_5_VLModelOutputWithPast]:
    r"""
    image_grid_thw (`torch.LongTensor` of shape `(num_images, 3)`, *optional*):
        The temporal, height and width of feature shape of each image in LLM.
    video_grid_thw (`torch.LongTensor` of shape `(num_videos, 3)`, *optional*):
        The temporal, height and width of feature shape of each video in LLM.
    rope_deltas (`torch.LongTensor` of shape `(batch_size, )`, *optional*):
        The rope index difference between sequence length and multimodal rope.
    second_per_grid_ts (`torch.Tensor` of shape `(num_videos)`, *optional*):
        The time interval (in seconds) for each grid along the temporal dimension in the 3D position IDs.
    """
    ...
    if position_ids is None:
        # Calculate RoPE index once per generation in the pre-fill stage only.
        # When compiling, we can't check tensor values thus we check only input length
        # It is safe to assume that `length!=1` means we're in pre-fill because compiled
        # models currently cannot do assisted decoding
        prefill_compiled_stage = is_torchdynamo_compiling() and (
            (input_ids is not None and input_ids.shape[1] != 1)
            or (inputs_embeds is not None and inputs_embeds.shape[1] != 1)
        )
        prefill_noncompiled_stage = not is_torchdynamo_compiling() and (
            (cache_position is not None and cache_position[0] == 0)
            or (past_key_values is None or past_key_values.get_seq_length() == 0)
        )
        if (prefill_compiled_stage or prefill_noncompiled_stage) or self.rope_deltas is None:
            position_ids, rope_deltas = self.get_rope_index(
                input_ids,
                image_grid_thw,
                video_grid_thw,
                second_per_grid_ts=second_per_grid_ts,  # Here second_per_grid_ts is passed
                attention_mask=attention_mask,
            )
            self.rope_deltas = rope_deltas
        else:
            batch_size, seq_length, _ = inputs_embeds.shape
            position_ids = torch.arange(seq_length, device=inputs_embeds.device)
            position_ids = position_ids.view(1, 1, -1).expand(3, batch_size, -1)
            if cache_position is not None:
                delta = (cache_position[0] + self.rope_deltas).to(inputs_embeds.device)
            else:
                delta = torch.zeros((batch_size, seq_length), device=inputs_embeds.device)
            delta = delta.repeat_interleave(batch_size // delta.shape[0], dim=1)
            position_ids = position_ids + delta.to(position_ids.device)
    ...
    return output if return_dict else output.to_tuple()
```
As a result, the model may compute incorrect RoPE indices for video inputs, especially when variable frame rates or temporal scaling are involved.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Load Qwen2.5-VL from Hugging Face:
```python
from transformers import AutoModelForCausalLM, AutoProcessor

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-VL")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL")
```
2. Run generation with video inputs.
3. Check prepare_inputs_for_generation:
```python
vision_positions, rope_deltas = self.model.get_rope_index(
    model_inputs.get("input_ids", None),
    image_grid_thw=image_grid_thw,
    video_grid_thw=video_grid_thw,
    attention_mask=attention_mask,
    # ⚠ second_per_grid_ts is missing here
)
```
### Expected behavior
- `second_per_grid_ts` should be passed into `get_rope_index` during generation, to align with training behavior.
- This ensures temporal RoPE indices are computed consistently between training and inference.
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41152/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41151
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41151/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41151/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41151/events
|
https://github.com/huggingface/transformers/issues/41151
| 3,452,178,745
|
I_kwDOCUB6oc7NxBE5
| 41,151
|
Incorrect `norm_topk_prob` config for loading Qwen3 MoE GGUF
|
{
"login": "yuko29",
"id": 20753810,
"node_id": "MDQ6VXNlcjIwNzUzODEw",
"avatar_url": "https://avatars.githubusercontent.com/u/20753810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuko29",
"html_url": "https://github.com/yuko29",
"followers_url": "https://api.github.com/users/yuko29/followers",
"following_url": "https://api.github.com/users/yuko29/following{/other_user}",
"gists_url": "https://api.github.com/users/yuko29/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuko29/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuko29/subscriptions",
"organizations_url": "https://api.github.com/users/yuko29/orgs",
"repos_url": "https://api.github.com/users/yuko29/repos",
"events_url": "https://api.github.com/users/yuko29/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuko29/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-25T06:51:48
| 2025-10-15T10:41:21
| null |
NONE
| null | null | null | null |
For Qwen3 MoE models (e.g., `Qwen/Qwen3-30B-A3B-Instruct-2507`), the released [`config.json`](https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/config.json#L20) specifies `norm_topk_prob = True`. However, when loading from the GGUF format, this parameter defaults to `False` instead of inheriting the correct setting; in particular, `norm_topk_prob` is currently not read from the GGUF metadata at all.
This leads to an incorrect forward pass for these models.
I'm not sure whether the correct fix is to explicitly set `norm_topk_prob = True` by default when loading Qwen3 MoE GGUF models.
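For context on what the flag changes, here is a minimal routing sketch, loosely modelled on Qwen MoE-style top-k routing (the helper below is illustrative only, not the transformers implementation): with `norm_topk_prob=True` the kept top-k weights are renormalised to sum to 1, while `False` leaves the raw softmax mass.

```python
import math

def route(logits, top_k, norm_topk_prob):
    """Illustrative top-k MoE routing: softmax over expert logits, keep the
    top_k largest weights, and optionally renormalise them to sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    weights = sorted((e / total for e in exps), reverse=True)[:top_k]
    if norm_topk_prob:
        kept = sum(weights)
        weights = [w / kept for w in weights]
    return weights

logits = [2.0, 1.0, 0.5, -1.0]
w_norm = route(logits, 2, True)   # renormalised, sums to 1.0
w_raw = route(logits, 2, False)   # raw softmax mass, sums to < 1.0
print(w_norm, w_raw)
```

Either way the relative ordering of experts is unchanged; only the magnitude of the routing weights differs, which is why the mismatch shows up as a silently degraded forward pass rather than a crash.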
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41151/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41150
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41150/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41150/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41150/events
|
https://github.com/huggingface/transformers/issues/41150
| 3,452,177,660
|
I_kwDOCUB6oc7NxAz8
| 41,150
|
Error message: XPU out of memory. Tried to allocate 4.00 GiB (GPU 0; 15.11 GiB total capacity; 0 bytes already allocated; 0 bytes reserved in total by PyTorch)
|
{
"login": "fablevi",
"id": 97455713,
"node_id": "U_kgDOBc8OYQ",
"avatar_url": "https://avatars.githubusercontent.com/u/97455713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fablevi",
"html_url": "https://github.com/fablevi",
"followers_url": "https://api.github.com/users/fablevi/followers",
"following_url": "https://api.github.com/users/fablevi/following{/other_user}",
"gists_url": "https://api.github.com/users/fablevi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fablevi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fablevi/subscriptions",
"organizations_url": "https://api.github.com/users/fablevi/orgs",
"repos_url": "https://api.github.com/users/fablevi/repos",
"events_url": "https://api.github.com/users/fablevi/events{/privacy}",
"received_events_url": "https://api.github.com/users/fablevi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-25T06:51:20
| 2025-10-27T12:35:40
| null |
NONE
| null | null | null | null |
### System Info
Soo the problem is simple:
If i only install xpu torch and intel_extension_for_pytorch then all my 16gbvram is ready but if i install transformers its only 4gb vram is aviable.
With this the 16gb vram works:
The code:
```python
import torch
from torch import xpu
import os
import intel_extension_for_pytorch

os.environ['UR_L0_USE_RELAXED_ALLOCATION_LIMITS'] = '1'
os.environ['IGC_ExtraOCLOptions'] = "-cl-intel-greater-than-4GB-buffer-required"

def xpu_memory_test():
    if not xpu.is_available():
        print("XPU not available!")
        return
    device = torch.device("xpu")
    print(f"\n=== XPU Memory Test ===")
    try:
        print(f"Device Name: {torch.xpu.get_device_name(device)}")
        max_alloc = torch.xpu.max_memory_allocated(device) / (1024**3)
        total_mem = torch.xpu.get_device_properties(device).total_memory / (1024**3)
        print(f"\nDevice Memory: {total_mem:.2f}GB total")
        print(f"Max allocated during session: {max_alloc:.2f}GB")
        size_step = 0.1
        current_size = 1.0
        last_success = 0
        while current_size <= total_mem:
            tensor_size = int(current_size * (1024**3 / 4))
            print(f"\nAttempting to allocate {current_size}GB tensor...")
            try:
                test_tensor = torch.empty(tensor_size, dtype=torch.float32, device=device)
                torch.xpu.synchronize(device)
                allocated = torch.xpu.memory_allocated(device) / (1024**3)
                print(f"Success! Current allocated: {allocated:.2f}GB")
                os.system("free -h")
                del test_tensor
                torch.xpu.empty_cache()
                last_success = current_size
                current_size += size_step
            except RuntimeError as e:
                print(f"\nAllocation failed at {current_size}GB (last success: {last_success}GB)")
                print(f"Error message: {str(e)}")
                os.system("free -h")
                break
    except Exception as e:
        print(f"\nError during memory test: {str(e)}")
    finally:
        allocated = torch.xpu.memory_allocated(device) / (1024**3)
        print(f"\n! Final allocated memory: {allocated:.2f}GB")
        print("Test completed.")

if __name__ == "__main__":
    xpu_memory_test()
    torch.xpu.empty_cache()
```
pip list (no transformers)
Package Version
--------------------------- ----------
accelerate 1.10.1
certifi 2025.8.3
charset-normalizer 3.4.3
dpcpp-cpp-rt 2025.0.4
filelock 3.13.1
fsspec 2024.6.1
hf-xet 1.1.10
huggingface-hub 0.35.1
idna 3.10
impi-devel 2021.14.1
impi-rt 2021.14.1
intel-cmplr-lib-rt 2025.0.4
intel-cmplr-lib-ur 2025.0.4
intel-cmplr-lic-rt 2025.0.4
intel_extension_for_pytorch 2.7.10+xpu
intel-opencl-rt 2025.0.4
intel-openmp 2025.0.4
intel-pti 0.10.1
intel-sycl-rt 2025.0.4
Jinja2 3.1.4
MarkupSafe 2.1.5
mkl 2025.0.1
mkl-dpcpp 2025.0.1
mpmath 1.3.0
networkx 3.3
numpy 2.1.2
oneccl 2021.14.1
oneccl-bind-pt 2.7.0+xpu
oneccl-devel 2021.14.1
onemkl-sycl-blas 2025.0.1
onemkl-sycl-datafitting 2025.0.1
onemkl-sycl-dft 2025.0.1
onemkl-sycl-lapack 2025.0.1
onemkl-sycl-rng 2025.0.1
onemkl-sycl-sparse 2025.0.1
onemkl-sycl-stats 2025.0.1
onemkl-sycl-vm 2025.0.1
packaging 25.0
pillow 11.0.0
pip 24.0
psutil 7.1.0
pytorch-triton-xpu 3.3.0
PyYAML 6.0.2
regex 2025.9.18
requests 2.32.5
ruamel.yaml 0.18.15
ruamel.yaml.clib 0.2.14
safetensors 0.6.2
setuptools 65.5.0
sympy 1.13.3
tbb 2022.2.0
tcmlib 1.2.0
tokenizers 0.22.1
torch 2.7.0+xpu
torchaudio 2.7.0+xpu
torchvision 0.22.0+xpu
tqdm 4.67.1
typing_extensions 4.12.2
umf 0.9.1
urllib3 2.5.0
But after installing transformers it drops back to 4 GB of VRAM:
pip list (with transformers)
Package Version
--------------------------- ----------
accelerate 1.10.1
certifi 2025.8.3
charset-normalizer 3.4.3
dpcpp-cpp-rt 2025.0.4
filelock 3.13.1
fsspec 2024.6.1
hf-xet 1.1.10
huggingface-hub 0.35.1
idna 3.10
impi-devel 2021.14.1
impi-rt 2021.14.1
intel-cmplr-lib-rt 2025.0.4
intel-cmplr-lib-ur 2025.0.4
intel-cmplr-lic-rt 2025.0.4
intel_extension_for_pytorch 2.7.10+xpu
intel-opencl-rt 2025.0.4
intel-openmp 2025.0.4
intel-pti 0.10.1
intel-sycl-rt 2025.0.4
Jinja2 3.1.4
MarkupSafe 2.1.5
mkl 2025.0.1
mkl-dpcpp 2025.0.1
mpmath 1.3.0
networkx 3.3
numpy 2.1.2
oneccl 2021.14.1
oneccl-bind-pt 2.7.0+xpu
oneccl-devel 2021.14.1
onemkl-sycl-blas 2025.0.1
onemkl-sycl-datafitting 2025.0.1
onemkl-sycl-dft 2025.0.1
onemkl-sycl-lapack 2025.0.1
onemkl-sycl-rng 2025.0.1
onemkl-sycl-sparse 2025.0.1
onemkl-sycl-stats 2025.0.1
onemkl-sycl-vm 2025.0.1
packaging 25.0
pillow 11.0.0
pip 24.0
psutil 7.1.0
pytorch-triton-xpu 3.3.0
PyYAML 6.0.2
regex 2025.9.18
requests 2.32.5
ruamel.yaml 0.18.15
ruamel.yaml.clib 0.2.14
safetensors 0.6.2
setuptools 65.5.0
sympy 1.13.3
tbb 2022.2.0
tcmlib 1.2.0
tokenizers 0.22.1
torch 2.7.0+xpu
torchaudio 2.7.0+xpu
torchvision 0.22.0+xpu
tqdm 4.67.1
transformers 4.56.2
typing_extensions 4.12.2
umf 0.9.1
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41150/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41149
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41149/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41149/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41149/events
|
https://github.com/huggingface/transformers/pull/41149
| 3,452,135,975
|
PR_kwDOCUB6oc6qbi8e
| 41,149
|
Improve image_utils reliability: safe PIL imports, torchvision error handling, and bug fixes
|
{
"login": "paytonison",
"id": 148833579,
"node_id": "U_kgDOCN8FKw",
"avatar_url": "https://avatars.githubusercontent.com/u/148833579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paytonison",
"html_url": "https://github.com/paytonison",
"followers_url": "https://api.github.com/users/paytonison/followers",
"following_url": "https://api.github.com/users/paytonison/following{/other_user}",
"gists_url": "https://api.github.com/users/paytonison/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paytonison/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paytonison/subscriptions",
"organizations_url": "https://api.github.com/users/paytonison/orgs",
"repos_url": "https://api.github.com/users/paytonison/repos",
"events_url": "https://api.github.com/users/paytonison/events{/privacy}",
"received_events_url": "https://api.github.com/users/paytonison/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T06:37:45
| 2025-09-28T04:08:45
| 2025-09-28T04:08:45
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41149",
"html_url": "https://github.com/huggingface/transformers/pull/41149",
"diff_url": "https://github.com/huggingface/transformers/pull/41149.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41149.patch",
"merged_at": null
}
|
## Robust vision import handling in `image_utils.py`
This PR makes the vision-related imports in `image_utils.py` more robust and resilient to import failures, particularly for text-only users who may have broken or incomplete vision dependencies.
### Changes Made
#### 🔧 **Robust PIL Import Handling**
- Changed from direct `PIL.Image` and `PIL.ImageOps` imports to safer `import PIL` pattern
- Prevents crashes when PIL modules are partially available or corrupted
#### 🎯 **Custom PILImageResampling Enum**
- Replaced dependency on `PIL.Image.Resampling` with a custom `PILImageResampling(ExplicitEnum)` class
- Ensures consistent resampling constants across different PIL versions
- Values: NEAREST(0), LANCZOS(1), BILINEAR(2), BICUBIC(3), BOX(4), HAMMING(5)
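The custom enum could look roughly like this (an illustrative sketch, not the PR's exact code — transformers' `ExplicitEnum` base is replaced here by a plain int-valued `Enum` carrying the same constants):

```python
from enum import Enum

class PILImageResampling(int, Enum):
    """Resampling constants mirroring PIL.Image.Resampling's values,
    usable even when PIL itself fails to import."""
    NEAREST = 0
    LANCZOS = 1
    BILINEAR = 2
    BICUBIC = 3
    BOX = 4
    HAMMING = 5
```

Because the members are int-valued, they compare equal to the raw PIL constants, so existing call sites that pass integers keep working.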
#### 🛡️ **Torchvision Import Error Handling**
- Wrapped torchvision imports in try/except blocks with proper logging
- Gracefully degrades when torchvision is installed but broken
- Warns users about torchvision issues while allowing text-only workflows to continue
- Sets safe fallback values (`InterpolationMode = None`, empty mapping dict)
#### 🔧 **Code Quality Improvements**
- **concatenate_list refactor**: Improved clarity and efficiency by extracting `first_item = input_list[0]`
- **Bug fix**: Fixed variable name error in `validate_annotations` (was `{format}`, now correctly `{annotation_format}`)
- **Import optimization**: Added safer torch tensor detection with `is_torch_tensor()`
### Benefits
- **Improved reliability** for text-only users with incomplete vision setups
- **Better error handling** with informative warning messages
- **Backward compatibility** maintained while improving robustness
- **Cleaner code** with better separation of concerns
### Testing
The changes maintain full API compatibility and should not break existing functionality. Vision-dependent features will continue to work when dependencies are properly installed, while gracefully degrading for text-only environments.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts @qubvel - Vision models and image processing utilities
|
{
"login": "paytonison",
"id": 148833579,
"node_id": "U_kgDOCN8FKw",
"avatar_url": "https://avatars.githubusercontent.com/u/148833579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paytonison",
"html_url": "https://github.com/paytonison",
"followers_url": "https://api.github.com/users/paytonison/followers",
"following_url": "https://api.github.com/users/paytonison/following{/other_user}",
"gists_url": "https://api.github.com/users/paytonison/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paytonison/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paytonison/subscriptions",
"organizations_url": "https://api.github.com/users/paytonison/orgs",
"repos_url": "https://api.github.com/users/paytonison/repos",
"events_url": "https://api.github.com/users/paytonison/events{/privacy}",
"received_events_url": "https://api.github.com/users/paytonison/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41149/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41148
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41148/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41148/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41148/events
|
https://github.com/huggingface/transformers/pull/41148
| 3,452,008,114
|
PR_kwDOCUB6oc6qbHOy
| 41,148
|
Guard torchvision imports to prevent crashes in text-only pipelines
|
{
"login": "paytonison",
"id": 148833579,
"node_id": "U_kgDOCN8FKw",
"avatar_url": "https://avatars.githubusercontent.com/u/148833579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paytonison",
"html_url": "https://github.com/paytonison",
"followers_url": "https://api.github.com/users/paytonison/followers",
"following_url": "https://api.github.com/users/paytonison/following{/other_user}",
"gists_url": "https://api.github.com/users/paytonison/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paytonison/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paytonison/subscriptions",
"organizations_url": "https://api.github.com/users/paytonison/orgs",
"repos_url": "https://api.github.com/users/paytonison/repos",
"events_url": "https://api.github.com/users/paytonison/events{/privacy}",
"received_events_url": "https://api.github.com/users/paytonison/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T05:48:34
| 2025-09-25T06:22:28
| 2025-09-25T06:17:42
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41148",
"html_url": "https://github.com/huggingface/transformers/pull/41148",
"diff_url": "https://github.com/huggingface/transformers/pull/41148.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41148.patch",
"merged_at": null
}
|
### Problem
Users running text-only pipelines (like pipeline("text-generation")) currently crash if torchvision is installed but not version-matched with torch. The error looks like:
```
RuntimeError: operator torchvision::nms does not exist
...
ModuleNotFoundError: Could not import module 'pipeline'
```
This happens because image_utils.py unconditionally imports torchvision transforms at import time, even if the user never touches an image pipeline.
### Solution
* Guard torchvision imports in image_utils.py with try/except.
* If torchvision fails to import, log a warning and fall back to no-op mappings.
* Vision-specific pipelines will still raise later if torchvision is truly required, but text-only workflows are unaffected.
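The guard could take roughly this shape (a sketch of the approach, not the exact diff; the mapping contents are placeholders):

```python
import logging

logger = logging.getLogger(__name__)

try:
    # Fails with ImportError when torchvision is absent, and with
    # RuntimeError ("operator torchvision::nms does not exist") when
    # the installed build does not match torch.
    from torchvision.transforms import InterpolationMode

    pil_torch_interpolation_mapping = {"bilinear": InterpolationMode.BILINEAR}
except Exception as err:
    logger.warning("torchvision could not be imported; vision features are disabled: %s", err)
    InterpolationMode = None
    pil_torch_interpolation_mapping = {}
```

Text-only code paths never touch `InterpolationMode`, so they proceed normally; vision pipelines that do need it can raise a clear error when they find it set to `None`.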
### Impact
* Fixes a common pain point for text-only users who don’t care about image pipelines.
* No behavior change for healthy torch/torchvision stacks.
* Provides a clearer warning instead of a hard crash.
### How to test
* Uninstall torchvision: pip uninstall torchvision → text-only pipeline still works.
* Install mismatched torchvision (simulate bad build) → text-only pipeline still works with warning.
* Run vision pipeline with working torchvision → behaves as before.
### Notes
This change is deliberately minimal and does not alter API. It only makes import-time more robust.
Resolves #41146.
---
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Models:
@ArthurZucker
Library:
@Rocketknight1
|
{
"login": "paytonison",
"id": 148833579,
"node_id": "U_kgDOCN8FKw",
"avatar_url": "https://avatars.githubusercontent.com/u/148833579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paytonison",
"html_url": "https://github.com/paytonison",
"followers_url": "https://api.github.com/users/paytonison/followers",
"following_url": "https://api.github.com/users/paytonison/following{/other_user}",
"gists_url": "https://api.github.com/users/paytonison/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paytonison/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paytonison/subscriptions",
"organizations_url": "https://api.github.com/users/paytonison/orgs",
"repos_url": "https://api.github.com/users/paytonison/repos",
"events_url": "https://api.github.com/users/paytonison/events{/privacy}",
"received_events_url": "https://api.github.com/users/paytonison/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41148/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41147
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41147/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41147/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41147/events
|
https://github.com/huggingface/transformers/pull/41147
| 3,451,926,651
|
PR_kwDOCUB6oc6qa1aH
| 41,147
|
add rotary kernel support to Qwen3 model
|
{
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-25T05:13:28
| 2025-10-29T08:30:10
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41147",
"html_url": "https://github.com/huggingface/transformers/pull/41147",
"diff_url": "https://github.com/huggingface/transformers/pull/41147.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41147.patch",
"merged_at": null
}
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41147/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41146
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41146/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41146/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41146/events
|
https://github.com/huggingface/transformers/issues/41146
| 3,451,893,944
|
I_kwDOCUB6oc7Nv7i4
| 41,146
|
pipeline import fails with torchvision::nms operator mismatch
|
{
"login": "paytonison",
"id": 148833579,
"node_id": "U_kgDOCN8FKw",
"avatar_url": "https://avatars.githubusercontent.com/u/148833579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paytonison",
"html_url": "https://github.com/paytonison",
"followers_url": "https://api.github.com/users/paytonison/followers",
"following_url": "https://api.github.com/users/paytonison/following{/other_user}",
"gists_url": "https://api.github.com/users/paytonison/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paytonison/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paytonison/subscriptions",
"organizations_url": "https://api.github.com/users/paytonison/orgs",
"repos_url": "https://api.github.com/users/paytonison/repos",
"events_url": "https://api.github.com/users/paytonison/events{/privacy}",
"received_events_url": "https://api.github.com/users/paytonison/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-25T04:56:07
| 2025-10-25T08:01:59
| null |
NONE
| null | null | null | null |
### System Info
```bash
Traceback (most recent call last):
File "/usr/local/bin/transformers", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/usr/local/lib/python3.11/dist-packages/transformers/commands/transformers_cli.py", line 19, in <module>
from transformers.commands.add_new_model_like import AddNewModelLikeCommand
File "/usr/local/lib/python3.11/dist-packages/transformers/commands/add_new_model_like.py", line 26, in <module>
from ..models.auto.image_processing_auto import IMAGE_PROCESSOR_MAPPING_NAMES
File "/usr/local/lib/python3.11/dist-packages/transformers/models/auto/image_processing_auto.py", line 27, in <module>
from ...image_processing_utils import ImageProcessingMixin
File "/usr/local/lib/python3.11/dist-packages/transformers/image_processing_utils.py", line 21, in <module>
from .image_processing_base import BatchFeature, ImageProcessingMixin
File "/usr/local/lib/python3.11/dist-packages/transformers/image_processing_base.py", line 26, in <module>
from .image_utils import is_valid_image, load_image
File "/usr/local/lib/python3.11/dist-packages/transformers/image_utils.py", line 60, in <module>
from torchvision.transforms import InterpolationMode
File "/usr/local/lib/python3.11/dist-packages/torchvision/__init__.py", line 10, in <module>
from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils # usort:skip
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torchvision/_meta_registrations.py", line 163, in <module>
@torch.library.register_fake("torchvision::nms")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/library.py", line 1069, in register
use_lib._register_fake(
File "/usr/local/lib/python3.11/dist-packages/torch/library.py", line 219, in _register_fake
handle = entry.fake_impl.register(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/_library/fake_impl.py", line 50, in register
if torch._C._dispatch_has_kernel_for_dispatch_key(self.qualname, "Meta"):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: operator torchvision::nms does not exist
```
### Who can help?
@Rocketknight1 @ArthurZucker
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```py
from transformers import pipeline
pipe = pipeline("text-generation", model="google/gemma-3-270m")
```
### Expected behavior
For text-only pipelines, transformers should not crash due to torchvision import issues.
Either:
* Skip torchvision imports unless required (e.g., when instantiating image/audio pipelines), or
* Provide a clearer error message pointing to the Torch ↔︎ Torchvision version mismatch.
Workarounds:
* Uninstalling torchvision resolves the issue for text-only models.
* Alternatively, pinning matching torch / torchvision builds (e.g., torch==2.4.1, torchvision==0.19.1) avoids the error.
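Until the import is guarded, a quick way to confirm which builds are installed side by side (a diagnostic sketch, not part of transformers):

```python
from importlib import metadata

def check_vision_stack():
    """Report installed torch/torchvision versions to spot mismatched builds."""
    versions = {}
    for pkg in ("torch", "torchvision"):
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = None  # not installed
    return versions

print(check_vision_stack())
```

If `torchvision` reports a version whose release series does not correspond to the installed `torch` (e.g. torch 2.4.x with a torchvision built for a different torch), the `torchvision::nms` error above is the expected symptom.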
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41146/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41145
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41145/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41145/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41145/events
|
https://github.com/huggingface/transformers/pull/41145
| 3,451,863,438
|
PR_kwDOCUB6oc6qanyW
| 41,145
|
Add Rope kernel support to Qwen3 model
|
{
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T04:42:12
| 2025-09-25T05:12:52
| 2025-09-25T05:12:52
|
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41145",
"html_url": "https://github.com/huggingface/transformers/pull/41145",
"diff_url": "https://github.com/huggingface/transformers/pull/41145.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41145.patch",
"merged_at": null
}
| null |
{
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41145/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41144
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41144/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41144/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41144/events
|
https://github.com/huggingface/transformers/pull/41144
| 3,451,772,577
|
PR_kwDOCUB6oc6qaUJh
| 41,144
|
Support automatic conversion from zero checkpoint to universal checkpoint.
|
{
"login": "zhengchenyu",
"id": 10381583,
"node_id": "MDQ6VXNlcjEwMzgxNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/10381583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhengchenyu",
"html_url": "https://github.com/zhengchenyu",
"followers_url": "https://api.github.com/users/zhengchenyu/followers",
"following_url": "https://api.github.com/users/zhengchenyu/following{/other_user}",
"gists_url": "https://api.github.com/users/zhengchenyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhengchenyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhengchenyu/subscriptions",
"organizations_url": "https://api.github.com/users/zhengchenyu/orgs",
"repos_url": "https://api.github.com/users/zhengchenyu/repos",
"events_url": "https://api.github.com/users/zhengchenyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhengchenyu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-25T03:55:26
| 2025-10-02T02:26:50
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41144",
"html_url": "https://github.com/huggingface/transformers/pull/41144",
"diff_url": "https://github.com/huggingface/transformers/pull/41144.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41144.patch",
"merged_at": null
}
|
# What does this PR do?
When DeepSpeed's world_size changes, ZeRO checkpoints need to be converted to universal checkpoints. This PR adds support for automatic conversion.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41144/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41143
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41143/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41143/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41143/events
|
https://github.com/huggingface/transformers/pull/41143
| 3,451,746,619
|
PR_kwDOCUB6oc6qaOp_
| 41,143
|
Adapt to the SDPA interface to enable the NPU to call FlashAttentionScore
|
{
"login": "frozenleaves",
"id": 46097299,
"node_id": "MDQ6VXNlcjQ2MDk3Mjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/46097299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frozenleaves",
"html_url": "https://github.com/frozenleaves",
"followers_url": "https://api.github.com/users/frozenleaves/followers",
"following_url": "https://api.github.com/users/frozenleaves/following{/other_user}",
"gists_url": "https://api.github.com/users/frozenleaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frozenleaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frozenleaves/subscriptions",
"organizations_url": "https://api.github.com/users/frozenleaves/orgs",
"repos_url": "https://api.github.com/users/frozenleaves/repos",
"events_url": "https://api.github.com/users/frozenleaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/frozenleaves/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T03:40:34
| 2025-09-30T14:19:58
| 2025-09-30T14:19:58
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41143",
"html_url": "https://github.com/huggingface/transformers/pull/41143",
"diff_url": "https://github.com/huggingface/transformers/pull/41143.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41143.patch",
"merged_at": "2025-09-30T14:19:58"
}
|
# What does this PR do?
Enable the torch native SDPA interface to invoke the FlashAttentionScore operator on NPU.
----
### Restrictions on using the torch native interface to call the FA operator:
1. When is_causal is enabled, attn_mask must be None; when is_causal is disabled and a valid attn_mask is provided, its dtype must be bool.
2. The FA operator is executed when the query, key, and value dtypes are supported and requires_grad is enabled.
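The two calling conventions described above can be sketched with PyTorch's public `scaled_dot_product_attention` API. This is a minimal, CPU-runnable illustration of the mask/causal restrictions only; the NPU dispatch to FlashAttentionScore itself happens inside the backend and is not shown here.

```python
import torch
import torch.nn.functional as F

# Shapes: (batch, num_heads, seq_len, head_dim)
q = torch.randn(1, 2, 4, 8)
k = torch.randn(1, 2, 4, 8)
v = torch.randn(1, 2, 4, 8)

# Restriction 1a: with is_causal=True, attn_mask must be None.
out_causal = F.scaled_dot_product_attention(q, k, v, attn_mask=None, is_causal=True)

# Restriction 1b: with is_causal=False, a provided mask must be boolean
# (True = attend, False = masked out) for the FA path to accept it.
bool_mask = torch.ones(1, 1, 4, 4, dtype=torch.bool)
out_masked = F.scaled_dot_product_attention(q, k, v, attn_mask=bool_mask, is_causal=False)

print(out_causal.shape, out_masked.shape)
```

On NPU, passing a float mask together with `is_causal=False` (or any mask with `is_causal=True`) would fall outside the restrictions above and miss the FlashAttentionScore path.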
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41143/timeline
| null | null | null | null | true
| true
|