| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (null) | comments (list) | created_at (timestamp[ms]) | updated_at (timestamp[ms]) | closed_at (timestamp[ms]) | author_association (string) | type (dict) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | sub_issues_summary (dict) | issue_dependencies_summary (dict) | is_pull_request (bool) | is_closed (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/41142
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41142/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41142/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41142/events
|
https://github.com/huggingface/transformers/pull/41142
| 3,451,668,971
|
PR_kwDOCUB6oc6qZ-Zf
| 41,142
|
Fix typing of train_args
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T03:03:08
| 2025-09-30T14:52:17
| 2025-09-30T14:28:03
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41142",
"html_url": "https://github.com/huggingface/transformers/pull/41142",
"diff_url": "https://github.com/huggingface/transformers/pull/41142.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41142.patch",
"merged_at": "2025-09-30T14:28:03"
}
|
# What does this PR do?
Some `Optional` type hints are quite confusing because their default values are not `None`.
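A minimal sketch of the confusion this PR addresses (the parameter name `eval_steps` is illustrative, not an actual `train_args` field):

```python
from typing import Optional

# Misleading: the annotation says Optional, but the default is 500, not None,
# so a reader may wrongly assume that omitting the argument disables evaluation.
def evaluate_every(eval_steps: Optional[int] = 500) -> int:
    return -1 if eval_steps is None else eval_steps

# Clearer: drop Optional when None is never the default (the kind of fix made here).
def evaluate_every_fixed(eval_steps: int = 500) -> int:
    return eval_steps

assert evaluate_every() == 500     # default is 500, despite the Optional hint
assert evaluate_every(None) == -1  # None is still accepted, adding to the confusion
```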
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41142/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41141
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41141/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41141/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41141/events
|
https://github.com/huggingface/transformers/issues/41141
| 3,451,662,789
|
I_kwDOCUB6oc7NvDHF
| 41,141
|
Need a concise example of Tensor Parallelism (TP) training using Trainer/SFTTrainer.
|
{
"login": "meet-minimalist",
"id": 26968850,
"node_id": "MDQ6VXNlcjI2OTY4ODUw",
"avatar_url": "https://avatars.githubusercontent.com/u/26968850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meet-minimalist",
"html_url": "https://github.com/meet-minimalist",
"followers_url": "https://api.github.com/users/meet-minimalist/followers",
"following_url": "https://api.github.com/users/meet-minimalist/following{/other_user}",
"gists_url": "https://api.github.com/users/meet-minimalist/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meet-minimalist/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meet-minimalist/subscriptions",
"organizations_url": "https://api.github.com/users/meet-minimalist/orgs",
"repos_url": "https://api.github.com/users/meet-minimalist/repos",
"events_url": "https://api.github.com/users/meet-minimalist/events{/privacy}",
"received_events_url": "https://api.github.com/users/meet-minimalist/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
},
{
"id": 2760822153,
"node_id": "MDU6TGFiZWwyNzYwODIyMTUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tensor%20Parallel",
"name": "Tensor Parallel",
"color": "1AD0A8",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-25T03:01:02
| 2025-10-15T21:41:20
| null |
NONE
| null | null | null | null |
### Feature request
I have checked the code, and there are only a few places that discuss TP. I saw that the model's `from_pretrained` method accepts `tp_plan` and `device_mesh`. I also checked that `TrainingArguments` can take a `parallelism_config`, which defines the TP/CP plan along with FSDP. However, I am not able to stitch these together to make TP-only training work. Please help.
Ref:
- https://github.com/huggingface/transformers/blob/main/examples/3D_parallel.py
### Motivation
I need to enable TP-only training, but no tutorial or example is available.
### Your contribution
Given proper understanding and guidance, I can come up with a clean example and documentation for this.
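For intuition only, here is a toy NumPy sketch of what a TP plan does to a single linear layer. This is not the Trainer/`tp_plan` API, just the column-parallel idea it implements:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))    # (batch, hidden) activations
w = rng.standard_normal((8, 6))    # full weight of one linear layer

# Column-parallel TP: each "device" holds a vertical shard of the weight.
shards = np.split(w, 2, axis=1)    # 2-way tensor parallelism
partials = [x @ s for s in shards] # each device computes its output slice
y_tp = np.concatenate(partials, axis=1)  # all-gather the slices

assert np.allclose(y_tp, x @ w)    # identical to the unsharded computation
```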
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41141/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41140
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41140/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41140/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41140/events
|
https://github.com/huggingface/transformers/issues/41140
| 3,451,641,412
|
I_kwDOCUB6oc7Nu95E
| 41,140
|
Is `_index_first_axis` functionally equivalent to FA2's `index_first_axis`?
|
{
"login": "A1waysBeenHere",
"id": 71289623,
"node_id": "MDQ6VXNlcjcxMjg5NjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/71289623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/A1waysBeenHere",
"html_url": "https://github.com/A1waysBeenHere",
"followers_url": "https://api.github.com/users/A1waysBeenHere/followers",
"following_url": "https://api.github.com/users/A1waysBeenHere/following{/other_user}",
"gists_url": "https://api.github.com/users/A1waysBeenHere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/A1waysBeenHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/A1waysBeenHere/subscriptions",
"organizations_url": "https://api.github.com/users/A1waysBeenHere/orgs",
"repos_url": "https://api.github.com/users/A1waysBeenHere/repos",
"events_url": "https://api.github.com/users/A1waysBeenHere/events{/privacy}",
"received_events_url": "https://api.github.com/users/A1waysBeenHere/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-25T02:51:35
| 2025-09-28T08:36:19
| 2025-09-28T08:36:19
|
NONE
| null | null | null | null |
Ref: #40002. This PR consolidated this function into `transformers.modeling_flash_attention_utils` and removed the `index_first_axis` function that was copied directly from the flash-attention repo. I am wondering if they are functionally equivalent.
```python
import torch
from einops import rearrange, repeat

# The shape is recovered to the original one at the end; per the comment below,
# this is equivalent to `return input[indices]`.
class IndexFirstAxis(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, indices):
        ctx.save_for_backward(indices)
        assert input.ndim >= 2
        ctx.first_axis_dim, other_shape = input.shape[0], input.shape[1:]
        second_dim = other_shape.numel()
        # TD [2022-03-04] For some reason torch.gather is a bit faster than indexing.
        # return input[indices]
        return torch.gather(
            rearrange(input, "b ... -> b (...)"), 0, repeat(indices, "z -> z d", d=second_dim)
        ).reshape(-1, *other_shape)


# The current one: I don't think the shapes are exactly the same here,
# since no shape is recovered.
def _index_first_axis(tensor, indices):
    """
    A local implementation of the PyTorch indexing operation `tensor[indices]` on the first axis,
    after flattening the first two dimensions of the tensor. This is functionally equivalent to
    FA2's `index_first_axis` and replaces the need to import it.
    """
    # The input tensor is expected to be of shape (batch, seq_len, ...). We flatten the first
    # two dimensions to get (total_tokens, ...) before indexing.
    reshaped_tensor = tensor.reshape(-1, *tensor.shape[2:])
    return reshaped_tensor[indices]
```
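A quick NumPy sketch of the question (standing in for the torch/einops ops above; function names are illustrative): the two paths agree when FA2's gather-based version receives the already-flattened `(batch * seq_len, ...)` tensor that `_index_first_axis` produces internally.

```python
import numpy as np

def fa2_style(x_flat, indices):
    # FA2's IndexFirstAxis: flatten trailing dims, gather rows, restore shape
    first, other_shape = x_flat.shape[0], x_flat.shape[1:]
    d = int(np.prod(other_shape))
    idx = np.repeat(indices[:, None], d, axis=1)
    return np.take_along_axis(x_flat.reshape(first, d), idx, axis=0).reshape(-1, *other_shape)

def local_style(tensor, indices):
    # transformers' _index_first_axis: flatten first two dims, then fancy-index
    return tensor.reshape(-1, *tensor.shape[2:])[indices]

x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)  # (batch, seq_len, dim)
idx = np.array([0, 2, 5])                             # token indices after unpadding
assert np.array_equal(fa2_style(x.reshape(-1, 4), idx), local_style(x, idx))
```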
|
{
"login": "A1waysBeenHere",
"id": 71289623,
"node_id": "MDQ6VXNlcjcxMjg5NjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/71289623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/A1waysBeenHere",
"html_url": "https://github.com/A1waysBeenHere",
"followers_url": "https://api.github.com/users/A1waysBeenHere/followers",
"following_url": "https://api.github.com/users/A1waysBeenHere/following{/other_user}",
"gists_url": "https://api.github.com/users/A1waysBeenHere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/A1waysBeenHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/A1waysBeenHere/subscriptions",
"organizations_url": "https://api.github.com/users/A1waysBeenHere/orgs",
"repos_url": "https://api.github.com/users/A1waysBeenHere/repos",
"events_url": "https://api.github.com/users/A1waysBeenHere/events{/privacy}",
"received_events_url": "https://api.github.com/users/A1waysBeenHere/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41140/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41139
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41139/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41139/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41139/events
|
https://github.com/huggingface/transformers/issues/41139
| 3,450,752,587
|
I_kwDOCUB6oc7Nrk5L
| 41,139
|
Suggestion: Support HMP protocol for multi-agent LLM workflows
|
{
"login": "kagvi13",
"id": 219128991,
"node_id": "U_kgDODQ-knw",
"avatar_url": "https://avatars.githubusercontent.com/u/219128991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kagvi13",
"html_url": "https://github.com/kagvi13",
"followers_url": "https://api.github.com/users/kagvi13/followers",
"following_url": "https://api.github.com/users/kagvi13/following{/other_user}",
"gists_url": "https://api.github.com/users/kagvi13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kagvi13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kagvi13/subscriptions",
"organizations_url": "https://api.github.com/users/kagvi13/orgs",
"repos_url": "https://api.github.com/users/kagvi13/repos",
"events_url": "https://api.github.com/users/kagvi13/events{/privacy}",
"received_events_url": "https://api.github.com/users/kagvi13/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T19:52:57
| 2025-09-25T12:47:12
| 2025-09-25T12:47:12
|
NONE
| null | null | null | null |
### Feature request
Enable **decentralized agent-to-agent communication** for Transformers models, allowing multiple agents to share knowledge, maintain cognitive diaries, and coordinate actions via [HyperCortex Mesh Protocol (HMP)](https://github.com/kagvi13/HMP).
### Motivation
Currently, Transformers pipelines operate mainly as individual agents. Adding HMP support would allow models to:
* Form **processed synthetic knowledge** instead of only raw outputs.
* Share reasoning, experience, and memory traces between agents.
* Participate in distributed cognitive workflows, enhancing collaboration and learning.
### Your contribution
We have developed an experimental protocol and reference implementation:
* [HMP-0004-v4.1.md](https://github.com/kagvi13/HMP/blob/main/docs/HMP-0004-v4.1.md) — protocol specification
* [HMP-Ethics.md](https://github.com/kagvi13/HMP/blob/main/docs/HMP-Ethics.md) — ethics and validation
* [dht\_protocol.md](https://github.com/kagvi13/HMP/blob/main/docs/dht_protocol.md) — P2P discovery
* [HMP-agent-REPL-cycle.md](https://github.com/kagvi13/HMP/blob/main/docs/HMP-agent-REPL-cycle.md) — example agent cycle
We’re happy to provide guidance or examples for integrating HMP into Transformers pipelines if there’s interest.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41139/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41138
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41138/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41138/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41138/events
|
https://github.com/huggingface/transformers/pull/41138
| 3,450,750,687
|
PR_kwDOCUB6oc6qW5op
| 41,138
|
Make quantizers good citizens loading-wise
|
{
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T19:52:29
| 2025-09-30T09:50:52
| 2025-09-29T15:04:45
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41138",
"html_url": "https://github.com/huggingface/transformers/pull/41138",
"diff_url": "https://github.com/huggingface/transformers/pull/41138.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41138.patch",
"merged_at": "2025-09-29T15:04:45"
}
|
# What does this PR do?
The quantizers should NEVER need to access the `state_dict` during `param_needs_quantization` or `create_quantized_param`, mostly for the following reasons:
- Most models are split across many state dict files (each file is capped at 5GB), so relying on other params of the state dict to quantize a given param is a very bad idea: if the needed params happen to live in different saved state dict files (representing the same model), the whole loading logic collapses without any possibility of fallback.
- Without quantizers, we load parameters one by one without ever materializing the whole state dict at once, and it's quite annoying to have to change that logic only for a few quantizers, as it's a good thing in general.
If the quantizers need several serialized keys to retrieve a given quantized param, they should cache those keys individually, then quantize once all the needed keys are retrieved. Note that this DOES NOT incur a memory cost: previously the whole state dict was loaded in memory, so the memory footprint was always higher before (a param will necessarily get quantized, and its serialized tensors freed, before the whole state dict is materialized, assuming they all reside in the same state_dict as was done previously). So this PR actually reduces memory use during loading.
Note that the current PR has fewer slow tests failing than main, especially for bitsandbytes, i.e. it actually solves a few issues in addition to simplifying the whole loading logic of the library.
A follow-up PR will unify the `update_expected_keys`, `update_missing_keys`, and `update_unexpected_keys` methods, as some are currently redundant or impractical with respect to the whole loading logic.
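A hypothetical sketch of the "cache keys individually, quantize once complete" pattern described above (class and key names are illustrative, not the actual transformers API):

```python
class ParamQuantCache:
    """Cache serialized tensors for one param; quantize once all keys arrive."""

    def __init__(self, needed_keys, quantize_fn):
        self.needed = set(needed_keys)
        self.seen = {}
        self.quantize_fn = quantize_fn

    def add(self, key, tensor):
        # Called once per key as tensors stream in, possibly from different files.
        self.seen[key] = tensor
        if self.needed <= self.seen.keys():
            quantized = self.quantize_fn(self.seen)
            self.seen.clear()  # free serialized tensors right away; no full state_dict kept
            return quantized
        return None  # still waiting on other keys

# Toy "quantization": combine the two serialized pieces once both are available.
cache = ParamQuantCache({"w.qweight", "w.scales"},
                        lambda t: t["w.qweight"] * t["w.scales"])
assert cache.add("w.qweight", 6) is None   # only one of the two keys seen so far
assert cache.add("w.scales", 0.5) == 3.0   # all keys present -> quantize and release
```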
|
{
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41138/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41137
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41137/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41137/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41137/events
|
https://github.com/huggingface/transformers/pull/41137
| 3,450,389,585
|
PR_kwDOCUB6oc6qVslv
| 41,137
|
:rotating_light: [`v5`] Remove legacy cache
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T17:59:42
| 2025-10-20T14:14:57
| 2025-09-25T11:11:19
|
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41137",
"html_url": "https://github.com/huggingface/transformers/pull/41137",
"diff_url": "https://github.com/huggingface/transformers/pull/41137.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41137.patch",
"merged_at": null
}
|
As per title; however, this really is only a draft for now. Only checked for basic removal.
The legacy cache still exists, but only for a few special cases atm (whisper, rag, ...).
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41137/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41136
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41136/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41136/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41136/events
|
https://github.com/huggingface/transformers/issues/41136
| 3,450,150,913
|
I_kwDOCUB6oc7NpSAB
| 41,136
|
Real time sentiment analysis dashboard
|
{
"login": "3015pavan",
"id": 197371018,
"node_id": "U_kgDOC8Okig",
"avatar_url": "https://avatars.githubusercontent.com/u/197371018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3015pavan",
"html_url": "https://github.com/3015pavan",
"followers_url": "https://api.github.com/users/3015pavan/followers",
"following_url": "https://api.github.com/users/3015pavan/following{/other_user}",
"gists_url": "https://api.github.com/users/3015pavan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3015pavan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3015pavan/subscriptions",
"organizations_url": "https://api.github.com/users/3015pavan/orgs",
"repos_url": "https://api.github.com/users/3015pavan/repos",
"events_url": "https://api.github.com/users/3015pavan/events{/privacy}",
"received_events_url": "https://api.github.com/users/3015pavan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T16:36:40
| 2025-09-25T12:42:19
| 2025-09-25T12:42:19
|
NONE
| null | null | null | null |
### Feature request
The Real-Time Sentiment Dashboard is an enhancement that streams live social media content (e.g., Twitter/X,
Reddit) and classifies sentiment using Hugging Face’s pre-trained models. It visualizes public sentiment trends
across topics, regions, and time.
### Motivation
While Hugging Face Transformers offer powerful sentiment analysis models, they are typically used in static or batch-processing contexts. In today’s fast-paced digital world, there’s a growing need to analyze public sentiment as it unfolds, especially across platforms like Twitter/X, Reddit, and news feeds.
This enhancement bridges that gap by introducing a real-time dashboard that streams live content, classifies sentiment instantly, and visualizes trends interactively. It empowers developers, journalists, and researchers to monitor public opinion dynamically, whether for elections, product launches, or social movements.
### Your contribution
This contribution includes:
- A streaming pipeline using snscrape or tweepy
- Sentiment classification using Hugging Face’s pipeline("sentiment-analysis")
- A dashboard UI built with Streamlit or Dash
- Optional geo-mapping and multilingual support
- Full documentation, testing, and demo app
This enhancement demonstrates how Hugging Face models can be used in live, real-world applications, making NLP more accessible and impactful.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41136/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41135
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41135/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41135/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41135/events
|
https://github.com/huggingface/transformers/pull/41135
| 3,449,625,641
|
PR_kwDOCUB6oc6qTGij
| 41,135
|
Add Zagros and ZagrosNext model architectures to Transformers
|
{
"login": "ZagrosLLMModel",
"id": 232272715,
"node_id": "U_kgDODdgzSw",
"avatar_url": "https://avatars.githubusercontent.com/u/232272715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZagrosLLMModel",
"html_url": "https://github.com/ZagrosLLMModel",
"followers_url": "https://api.github.com/users/ZagrosLLMModel/followers",
"following_url": "https://api.github.com/users/ZagrosLLMModel/following{/other_user}",
"gists_url": "https://api.github.com/users/ZagrosLLMModel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZagrosLLMModel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZagrosLLMModel/subscriptions",
"organizations_url": "https://api.github.com/users/ZagrosLLMModel/orgs",
"repos_url": "https://api.github.com/users/ZagrosLLMModel/repos",
"events_url": "https://api.github.com/users/ZagrosLLMModel/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZagrosLLMModel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T14:11:55
| 2025-09-28T12:33:38
| 2025-09-28T12:33:38
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41135",
"html_url": "https://github.com/huggingface/transformers/pull/41135",
"diff_url": "https://github.com/huggingface/transformers/pull/41135.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41135.patch",
"merged_at": null
}
|
What does this PR do?
This PR introduces two new model architectures, Zagros and ZagrosNext, to the Transformers library. The implementation includes:
Model architecture files (modeling_zagros.py and modeling_zagros_next.py) in src/transformers/models/zagros/ and src/transformers/models/zagros_next/.
Configuration files (configuration_zagros.py and configuration_zagros_next.py) for both models.
Comprehensive tests for both models in tests/models/zagros/ and tests/models/zagros_next/.
Updated documentation in docs/source/en/ with usage examples and details for Zagros and ZagrosNext.
Both models follow the standard structure and conventions of existing Transformers models, ensuring compatibility with the library's pipelines and utilities.
Motivation and Context
The Zagros and ZagrosNext models are designed to [briefly describe the purpose, e.g., "enhance performance on specific NLP tasks with novel architectural improvements"].
These models leverage standard Transformer conventions, making them easy to integrate into existing workflows.
The implementation is fully compatible with the Transformers library and supports all standard functionalities (e.g., training, inference, and pipeline integration).
Dependencies
No additional dependencies are required beyond the standard Transformers setup.
Tested with Python 3.9+ and PyTorch 2.0+.
Before submitting
This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request)?
Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
Did you write any new necessary tests?
Who can review?
@ArthurZucker (for text models), @Rocketknight1 (for pipelines and library compatibility), @stevhliu (for documentation)
Thank you for reviewing!
|
{
"login": "ZagrosLLMModel",
"id": 232272715,
"node_id": "U_kgDODdgzSw",
"avatar_url": "https://avatars.githubusercontent.com/u/232272715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZagrosLLMModel",
"html_url": "https://github.com/ZagrosLLMModel",
"followers_url": "https://api.github.com/users/ZagrosLLMModel/followers",
"following_url": "https://api.github.com/users/ZagrosLLMModel/following{/other_user}",
"gists_url": "https://api.github.com/users/ZagrosLLMModel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZagrosLLMModel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZagrosLLMModel/subscriptions",
"organizations_url": "https://api.github.com/users/ZagrosLLMModel/orgs",
"repos_url": "https://api.github.com/users/ZagrosLLMModel/repos",
"events_url": "https://api.github.com/users/ZagrosLLMModel/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZagrosLLMModel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41135/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41134
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41134/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41134/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41134/events
|
https://github.com/huggingface/transformers/pull/41134
| 3,449,522,474
|
PR_kwDOCUB6oc6qSwNY
| 41,134
|
Deprecate `half_precision_backend`
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T13:45:00
| 2025-09-30T09:36:46
| 2025-09-30T09:36:45
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41134",
"html_url": "https://github.com/huggingface/transformers/pull/41134",
"diff_url": "https://github.com/huggingface/transformers/pull/41134.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41134.patch",
"merged_at": "2025-09-30T09:36:44"
}
|
# What does this PR do?
This PR deprecates the `half_precision_backend` arg in Trainer. We don't really need the cpu and apex backends.
For apex, they even removed the amp folder from their project, since everything was upstreamed to torch a long time ago.
The installation process for apex is also complicated enough that I'm fairly sure there is no real usage.
For cpu, we were using `torch.autocast`, but we don't need that since it is supported in accelerate; the fp16-on-cpu case was already handled by accelerate anyway.
In the end, this shouldn't impact users at all. I left a deprecation message, but I don't think it is strictly needed.
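For context, the cpu path that accelerate now covers boils down to `torch.autocast`; a minimal sketch (not the actual Trainer code) of what that backend did:

```python
import torch

# Minimal sketch of CPU mixed precision via torch.autocast -- the mechanism
# behind the deprecated "cpu" half_precision_backend, now handled by accelerate.
x = torch.randn(4, 8)
w = torch.randn(8, 2)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = x @ w  # matmul is autocast-eligible, so it runs in bfloat16
print(y.dtype)  # torch.bfloat16
```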
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41134/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41134/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41133
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41133/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41133/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41133/events
|
https://github.com/huggingface/transformers/pull/41133
| 3,449,498,086
|
PR_kwDOCUB6oc6qSrBY
| 41,133
|
dummy commit
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T13:38:52
| 2025-09-24T14:31:48
| 2025-09-24T14:31:46
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41133",
"html_url": "https://github.com/huggingface/transformers/pull/41133",
"diff_url": "https://github.com/huggingface/transformers/pull/41133.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41133.patch",
"merged_at": "2025-09-24T14:31:46"
}
|
# What does this PR do?
dummy
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41133/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41132
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41132/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41132/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41132/events
|
https://github.com/huggingface/transformers/pull/41132
| 3,449,357,419
|
PR_kwDOCUB6oc6qSMfv
| 41,132
|
fix(SpeechT5Config): missing annotation on `inputs_to_logits_ratio` property
|
{
"login": "sw00",
"id": 2427972,
"node_id": "MDQ6VXNlcjI0Mjc5NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2427972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sw00",
"html_url": "https://github.com/sw00",
"followers_url": "https://api.github.com/users/sw00/followers",
"following_url": "https://api.github.com/users/sw00/following{/other_user}",
"gists_url": "https://api.github.com/users/sw00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sw00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sw00/subscriptions",
"organizations_url": "https://api.github.com/users/sw00/orgs",
"repos_url": "https://api.github.com/users/sw00/repos",
"events_url": "https://api.github.com/users/sw00/events{/privacy}",
"received_events_url": "https://api.github.com/users/sw00/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-24T13:04:17
| 2025-09-29T07:45:27
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41132",
"html_url": "https://github.com/huggingface/transformers/pull/41132",
"diff_url": "https://github.com/huggingface/transformers/pull/41132.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41132.patch",
"merged_at": null
}
|
# What does this PR do?
Adds the missing `@property` decorator on `SpeechT5Config.inputs_to_logits_ratio`; without it, `AutomaticSpeechRecognitionPipeline`'s preprocessing fails whenever `chunk_length_s` is specified.
Specifically, the following line fails because `getattr` resolves to a bound method instead of a value:
```python
align_to = getattr(self.model.config, "inputs_to_logits_ratio", 1)
```
This stacktrace is generated on this error:
```
Traceback (most recent call last):
File "/Users/sett/src/model-cloud/servers/omniserve/.venv/lib/python3.12/site-packages/litserve/loops/simple_loops.py", line 88, in run_single_loop
y = _inject_context(
^^^^^^^^^^^^^^^^
File "/Users/sett/src/model-cloud/servers/omniserve/.venv/lib/python3.12/site-packages/litserve/loops/base.py", line 48, in _inject_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/sett/src/model-cloud/servers/omniserve/src/omniserve/api.py", line 19, in predict
return self.model.predict(audio_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sett/src/model-cloud/servers/omniserve/src/omniserve/model.py", line 63, in predict
output = self.pipeline(input)
^^^^^^^^^^^^^^^^^^^^
File "/Users/sett/src/model-cloud/servers/omniserve/.venv/lib/python3.12/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 275, in __call__
return super().__call__(inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sett/src/model-cloud/servers/omniserve/.venv/lib/python3.12/site-packages/transformers/pipelines/base.py", line 1459, in __call__
return next(
^^^^^
File "/Users/sett/src/model-cloud/servers/omniserve/.venv/lib/python3.12/site-packages/transformers/pipelines/pt_utils.py", line 126, in __next__
item = next(self.iterator)
^^^^^^^^^^^^^^^^^^^
File "/Users/sett/src/model-cloud/servers/omniserve/.venv/lib/python3.12/site-packages/transformers/pipelines/pt_utils.py", line 271, in __next__
processed = self.infer(next(self.iterator), **self.params)
^^^^^^^^^^^^^^^^^^^
File "/Users/sett/src/model-cloud/servers/omniserve/.venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 734, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/Users/sett/src/model-cloud/servers/omniserve/.venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py", line 790, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sett/src/model-cloud/servers/omniserve/.venv/lib/python3.12/site-packages/torch/utils/data/_utils/fetch.py", line 33, in fetch
data.append(next(self.dataset_iter))
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sett/src/model-cloud/servers/omniserve/.venv/lib/python3.12/site-packages/transformers/pipelines/pt_utils.py", line 188, in __next__
processed = next(self.subiterator)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sett/src/model-cloud/servers/omniserve/.venv/lib/python3.12/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 450, in preprocess
chunk_len = int(round(chunk_length_s * self.feature_extractor.sampling_rate / align_to) * align_to)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~
TypeError: unsupported operand type(s) for /: 'float' and 'method'
```
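The failure mode is easy to reproduce outside transformers; a minimal sketch with hypothetical stand-in classes showing what `getattr` returns with and without the decorator:

```python
# Hypothetical stand-ins for SpeechT5Config, illustrating why the missing
# @property decorator breaks arithmetic on getattr()'s result.
class WithoutProperty:
    def inputs_to_logits_ratio(self):  # plain method: getattr returns the bound method
        return 320

class WithProperty:
    @property
    def inputs_to_logits_ratio(self):  # property: attribute access returns the value
        return 320

broken = getattr(WithoutProperty(), "inputs_to_logits_ratio", 1)
fixed = getattr(WithProperty(), "inputs_to_logits_ratio", 1)
print(callable(broken))  # True -- dividing by it raises TypeError
print(fixed)             # 320 -- usable in arithmetic
```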
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1 @eustlb
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41132/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41131
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41131/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41131/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41131/events
|
https://github.com/huggingface/transformers/pull/41131
| 3,449,319,508
|
PR_kwDOCUB6oc6qSEVY
| 41,131
|
[v5] More Training Args cleaning
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T12:54:30
| 2025-09-30T15:38:09
| 2025-09-30T15:38:07
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41131",
"html_url": "https://github.com/huggingface/transformers/pull/41131",
"diff_url": "https://github.com/huggingface/transformers/pull/41131.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41131.patch",
"merged_at": "2025-09-30T15:38:07"
}
|
# What does this PR do?
This PR removes `EvaluationStrategy`, as it is deprecated.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41131/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41130
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41130/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41130/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41130/events
|
https://github.com/huggingface/transformers/pull/41130
| 3,449,297,291
|
PR_kwDOCUB6oc6qR_ku
| 41,130
|
Import Callable from collections.abc
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T12:47:57
| 2025-10-09T12:17:39
| 2025-10-09T12:12:44
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41130",
"html_url": "https://github.com/huggingface/transformers/pull/41130",
"diff_url": "https://github.com/huggingface/transformers/pull/41130.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41130.patch",
"merged_at": "2025-10-09T12:12:44"
}
|
# What does this PR do?
Since Python 3.9 (PEP 585), `Callable` is subscriptable in `collections.abc` and the `typing.Callable` alias is deprecated, so it should be imported from `collections.abc`.
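A minimal illustration of the preferred import (the helper `apply_twice` is hypothetical, just to exercise the annotation):

```python
from collections.abc import Callable  # preferred over typing.Callable since PEP 585

def apply_twice(fn: Callable[[int], int], x: int) -> int:
    # Apply fn to x, then apply fn to that result.
    return fn(fn(x))

print(apply_twice(lambda n: n + 3, 1))  # 7
```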
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41130/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41129
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41129/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41129/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41129/events
|
https://github.com/huggingface/transformers/issues/41129
| 3,449,256,650
|
I_kwDOCUB6oc7Nl3rK
| 41,129
|
tensor mismatch when finetuning a smollm3 model
|
{
"login": "codefusser",
"id": 14119586,
"node_id": "MDQ6VXNlcjE0MTE5NTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14119586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codefusser",
"html_url": "https://github.com/codefusser",
"followers_url": "https://api.github.com/users/codefusser/followers",
"following_url": "https://api.github.com/users/codefusser/following{/other_user}",
"gists_url": "https://api.github.com/users/codefusser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codefusser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codefusser/subscriptions",
"organizations_url": "https://api.github.com/users/codefusser/orgs",
"repos_url": "https://api.github.com/users/codefusser/repos",
"events_url": "https://api.github.com/users/codefusser/events{/privacy}",
"received_events_url": "https://api.github.com/users/codefusser/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-24T12:36:10
| 2025-10-22T16:48:15
| null |
NONE
| null | null | null | null |
### System Info
When performing a fine-tuning job with batch size of 4 and max steps of 1000, it errors out with some tokenizer error
```
The tokenizer has new PAD/BOS/EOS tokens that differ from the model config and generation config. The model config and generation config were aligned accordingly, being updated with the tokenizer's values. Updated tokens: {'bos_token_id': None, 'pad_token_id': None}.
0%| | 0/500 [00:00<?, ?it/s]Traceback (most recent call last):
File "/tmp/script.py", line 191, in <module>
trainer.train()
File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/transformers/trainer.py", line 2328, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/transformers/trainer.py", line 2672, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/trl/trainer/sft_trainer.py", line 1189, in training_step
return super().training_step(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/transformers/trainer.py", line 4009, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.cache/uv/environments-v2/script-912247c0edd68a55/lib/python3.12/site-packages/trl/trainer/sft_trainer.py", line 1123, in compute_loss
entropy = torch.sum(per_token_entropy * attention_mask) / attention_mask.sum()
~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~
RuntimeError: The size of tensor a (4) must match the size of tensor b (8) at non-singleton dimension 0
0%| | 0/500 [00:01<?, ?it/s]
```
Please what might be wrong?
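For what it's worth, the final RuntimeError is a plain elementwise shape mismatch, independent of trl; a minimal reproduction of the same error:

```python
import torch

# The entropy term multiplies two tensors elementwise; if their leading
# dimensions differ (here 4 vs 8, as in the traceback), PyTorch raises
# the same RuntimeError about non-singleton dimension 0.
per_token_entropy = torch.ones(4)
attention_mask = torch.ones(8)
try:
    per_token_entropy * attention_mask
except RuntimeError as err:
    print(err)
```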
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
sft finetuning with a10-largex2 GPU
training parameters:
```python
# Configure training
config = SFTConfig(
    output_dir="./smollm3-jobs-sft",
    per_device_train_batch_size=4,
    learning_rate=3e-5,
    max_steps=1000,
    logging_steps=50,
    save_steps=200,
    push_to_hub=True,
    hub_model_id="hubsnippetai/smollm3-jobs-sft",
)

# Train
trainer = SFTTrainer(
    model=model,
    train_dataset=processed_train,
    args=config,
)
trainer.train()
```
### Expected behavior
After loading the datasets, the job should continue running while training the model without erroring out.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41129/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41128
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41128/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41128/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41128/events
|
https://github.com/huggingface/transformers/pull/41128
| 3,449,238,877
|
PR_kwDOCUB6oc6qRzIk
| 41,128
|
[v5] Remove tokenizer from Trainer
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T12:31:10
| 2025-09-30T15:42:12
| 2025-09-30T15:42:10
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41128",
"html_url": "https://github.com/huggingface/transformers/pull/41128",
"diff_url": "https://github.com/huggingface/transformers/pull/41128.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41128.patch",
"merged_at": "2025-09-30T15:42:10"
}
|
# What does this PR do?
This PR removes the tokenizer from `Trainer`, as it is deprecated for v5. Since we removed it from the init, it also makes sense to remove the tokenizer setter and property methods.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41128/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41127
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41127/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41127/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41127/events
|
https://github.com/huggingface/transformers/pull/41127
| 3,449,159,957
|
PR_kwDOCUB6oc6qRh4o
| 41,127
|
[v5] Remove train kwargs
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T12:08:51
| 2025-09-30T15:43:27
| 2025-09-30T15:43:25
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41127",
"html_url": "https://github.com/huggingface/transformers/pull/41127",
"diff_url": "https://github.com/huggingface/transformers/pull/41127.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41127.patch",
"merged_at": "2025-09-30T15:43:25"
}
|
# What does this PR do?
This PR removes the train kwargs. These were only used for deprecated arguments, and since we are removing those arguments, we can safely remove the kwargs as well.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41127/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41126
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41126/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41126/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41126/events
|
https://github.com/huggingface/transformers/pull/41126
| 3,449,135,316
|
PR_kwDOCUB6oc6qRcgf
| 41,126
|
Add AMD developer cloud support
|
{
"login": "fan-amd",
"id": 233904592,
"node_id": "U_kgDODfEZ0A",
"avatar_url": "https://avatars.githubusercontent.com/u/233904592?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fan-amd",
"html_url": "https://github.com/fan-amd",
"followers_url": "https://api.github.com/users/fan-amd/followers",
"following_url": "https://api.github.com/users/fan-amd/following{/other_user}",
"gists_url": "https://api.github.com/users/fan-amd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fan-amd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fan-amd/subscriptions",
"organizations_url": "https://api.github.com/users/fan-amd/orgs",
"repos_url": "https://api.github.com/users/fan-amd/repos",
"events_url": "https://api.github.com/users/fan-amd/events{/privacy}",
"received_events_url": "https://api.github.com/users/fan-amd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T12:02:54
| 2025-10-13T10:17:25
| 2025-10-13T10:17:25
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41126",
"html_url": "https://github.com/huggingface/transformers/pull/41126",
"diff_url": "https://github.com/huggingface/transformers/pull/41126.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41126.patch",
"merged_at": "2025-10-13T10:17:25"
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Add AMD Developer Cloud support for HF notebooks.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "pagezyhf",
"id": 165770107,
"node_id": "U_kgDOCeFzew",
"avatar_url": "https://avatars.githubusercontent.com/u/165770107?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pagezyhf",
"html_url": "https://github.com/pagezyhf",
"followers_url": "https://api.github.com/users/pagezyhf/followers",
"following_url": "https://api.github.com/users/pagezyhf/following{/other_user}",
"gists_url": "https://api.github.com/users/pagezyhf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pagezyhf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pagezyhf/subscriptions",
"organizations_url": "https://api.github.com/users/pagezyhf/orgs",
"repos_url": "https://api.github.com/users/pagezyhf/repos",
"events_url": "https://api.github.com/users/pagezyhf/events{/privacy}",
"received_events_url": "https://api.github.com/users/pagezyhf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41126/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41125
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41125/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41125/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41125/events
|
https://github.com/huggingface/transformers/issues/41125
| 3,449,043,023
|
I_kwDOCUB6oc7NlDhP
| 41,125
|
Confusion about Qwen3-Next’s linear attention
|
{
"login": "fanglinpu",
"id": 22862577,
"node_id": "MDQ6VXNlcjIyODYyNTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/22862577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fanglinpu",
"html_url": "https://github.com/fanglinpu",
"followers_url": "https://api.github.com/users/fanglinpu/followers",
"following_url": "https://api.github.com/users/fanglinpu/following{/other_user}",
"gists_url": "https://api.github.com/users/fanglinpu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fanglinpu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fanglinpu/subscriptions",
"organizations_url": "https://api.github.com/users/fanglinpu/orgs",
"repos_url": "https://api.github.com/users/fanglinpu/repos",
"events_url": "https://api.github.com/users/fanglinpu/events{/privacy}",
"received_events_url": "https://api.github.com/users/fanglinpu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T11:37:39
| 2025-09-25T10:58:01
| 2025-09-25T10:58:01
|
NONE
| null | null | null | null |
<img width="1796" height="1470" alt="Image" src="https://github.com/user-attachments/assets/ca6d3fa3-793d-40c0-8f90-0953d40893c2" />
It seems “sequence_length” and “num_heads” are reversed here.
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41125/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41124
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41124/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41124/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41124/events
|
https://github.com/huggingface/transformers/pull/41124
| 3,448,981,272
|
PR_kwDOCUB6oc6qQ6qT
| 41,124
|
Revert "Fix DeepSpeed mixed precision precedence over Accelerate defaults"
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T11:23:15
| 2025-09-30T15:37:44
| 2025-09-30T15:37:42
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41124",
"html_url": "https://github.com/huggingface/transformers/pull/41124",
"diff_url": "https://github.com/huggingface/transformers/pull/41124.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41124.patch",
"merged_at": "2025-09-30T15:37:42"
}
|
Reverts huggingface/transformers#39856
This was not the intended fix; it breaks the logic in DeepSpeed and makes things complicated to maintain.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41124/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41123
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41123/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41123/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41123/events
|
https://github.com/huggingface/transformers/pull/41123
| 3,448,861,677
|
PR_kwDOCUB6oc6qQhEa
| 41,123
|
[v5] Remove deprecated prediction loop
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T10:53:43
| 2025-09-30T15:43:03
| 2025-09-30T15:43:01
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41123",
"html_url": "https://github.com/huggingface/transformers/pull/41123",
"diff_url": "https://github.com/huggingface/transformers/pull/41123.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41123.patch",
"merged_at": "2025-09-30T15:43:01"
}
|
# What does this PR do?
This PR removes the legacy prediction loop that was deprecated 4 years ago. I think it is fine to remove it in v5 even without a prior deprecation warning, though one would have been better. I checked files on GitHub and no one really seems to use the legacy loop. Also, `SequentialDistributedSampler` was bound to be removed in v5.
If we still prefer to keep it for one more version, I'm happy to submit a PR for a deprecation message.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41123/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41122
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41122/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41122/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41122/events
|
https://github.com/huggingface/transformers/pull/41122
| 3,448,815,367
|
PR_kwDOCUB6oc6qQXQP
| 41,122
|
🚨 [V5] Remove deprecated `resume_download`
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 9105758243,
"node_id": "LA_kwDOCUB6oc8AAAACHr7YIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for_v5?",
"name": "for_v5?",
"color": "35BC94",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T10:40:09
| 2025-10-02T15:39:20
| 2025-10-02T14:44:35
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41122",
"html_url": "https://github.com/huggingface/transformers/pull/41122",
"diff_url": "https://github.com/huggingface/transformers/pull/41122.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41122.patch",
"merged_at": "2025-10-02T14:44:35"
}
|
# What does this PR do?
`resume_download` is a placeholder...
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41122/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41121
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41121/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41121/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41121/events
|
https://github.com/huggingface/transformers/pull/41121
| 3,448,529,880
|
PR_kwDOCUB6oc6qPYrA
| 41,121
|
fix: resolve the unexpected video frame drop issue of the InternVL model with multiple video inputs
|
{
"login": "WesKwong",
"id": 117013204,
"node_id": "U_kgDOBvl61A",
"avatar_url": "https://avatars.githubusercontent.com/u/117013204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WesKwong",
"html_url": "https://github.com/WesKwong",
"followers_url": "https://api.github.com/users/WesKwong/followers",
"following_url": "https://api.github.com/users/WesKwong/following{/other_user}",
"gists_url": "https://api.github.com/users/WesKwong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WesKwong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WesKwong/subscriptions",
"organizations_url": "https://api.github.com/users/WesKwong/orgs",
"repos_url": "https://api.github.com/users/WesKwong/repos",
"events_url": "https://api.github.com/users/WesKwong/events{/privacy}",
"received_events_url": "https://api.github.com/users/WesKwong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-24T09:25:23
| 2025-09-24T13:56:19
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41121",
"html_url": "https://github.com/huggingface/transformers/pull/41121",
"diff_url": "https://github.com/huggingface/transformers/pull/41121.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41121.patch",
"merged_at": null
}
|
# What does this PR do?
This PR addresses a bug where the InternVL preprocessor would incorrectly drop one video frame from subsequent videos in a multi-video input.
This issue was reported in: https://github.com/OpenGVLab/InternVL/issues/1178
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41121/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41120
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41120/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41120/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41120/events
|
https://github.com/huggingface/transformers/pull/41120
| 3,448,356,481
|
PR_kwDOCUB6oc6qOy4B
| 41,120
|
Add a line to make telemetry in examples more explicit
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T08:36:43
| 2025-09-26T13:22:25
| 2025-09-26T13:22:25
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41120",
"html_url": "https://github.com/huggingface/transformers/pull/41120",
"diff_url": "https://github.com/huggingface/transformers/pull/41120.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41120.patch",
"merged_at": null
}
| null |
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41120/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41119
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41119/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41119/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41119/events
|
https://github.com/huggingface/transformers/pull/41119
| 3,448,331,122
|
PR_kwDOCUB6oc6qOtOK
| 41,119
|
Separate docker images for Nvidia and AMD in benchmarking
|
{
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T08:30:01
| 2025-09-30T08:35:26
| 2025-09-29T15:03:28
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41119",
"html_url": "https://github.com/huggingface/transformers/pull/41119",
"diff_url": "https://github.com/huggingface/transformers/pull/41119.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41119.patch",
"merged_at": "2025-09-29T15:03:28"
}
|
# What does this PR do?
This PR makes it possible to run the benchmarks with a separate Docker image and set of options for each hardware platform.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41119/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41118
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41118/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41118/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41118/events
|
https://github.com/huggingface/transformers/pull/41118
| 3,448,220,293
|
PR_kwDOCUB6oc6qOUl7
| 41,118
|
Fixed MXFP4 model storage issue
|
{
"login": "YangKai0616",
"id": 103475281,
"node_id": "U_kgDOBiroUQ",
"avatar_url": "https://avatars.githubusercontent.com/u/103475281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YangKai0616",
"html_url": "https://github.com/YangKai0616",
"followers_url": "https://api.github.com/users/YangKai0616/followers",
"following_url": "https://api.github.com/users/YangKai0616/following{/other_user}",
"gists_url": "https://api.github.com/users/YangKai0616/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YangKai0616/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YangKai0616/subscriptions",
"organizations_url": "https://api.github.com/users/YangKai0616/orgs",
"repos_url": "https://api.github.com/users/YangKai0616/repos",
"events_url": "https://api.github.com/users/YangKai0616/events{/privacy}",
"received_events_url": "https://api.github.com/users/YangKai0616/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T07:59:26
| 2025-09-24T12:12:34
| 2025-09-24T12:11:51
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41118",
"html_url": "https://github.com/huggingface/transformers/pull/41118",
"diff_url": "https://github.com/huggingface/transformers/pull/41118.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41118.patch",
"merged_at": "2025-09-24T12:11:51"
}
|
# What does this PR do?
Saving the current **MXFP4** model throws a `TypeError`. This PR fixes that issue.
For example, running the test `RUN_SLOW=1 pytest -rA test_mxfp4.py::Mxfp4ModelTest::test_save_mxfp4` will result in the following error:
```
test_mxfp4.py::Mxfp4ModelTest::test_save_mxfp4 FAILED [100%]
================================================================================ FAILURES ================================================================================
_____________________________________________________________________ Mxfp4ModelTest.test_save_mxfp4 _____________________________________________________________________
self = <mxfp4.test_mxfp4.Mxfp4ModelTest testMethod=test_save_mxfp4>
def test_save_mxfp4(self):
"""Test saving quantized OpenAI MoE model with device_map"""
model = GptOssForCausalLM.from_pretrained(
self.model_name,
torch_dtype=torch.bfloat16,
device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(self.model_name)
with tempfile.TemporaryDirectory() as tmp:
# Save the model in mxfp4 format
> model.save_pretrained(tmp)
test_mxfp4.py:456:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GptOssForCausalLM(
(model): GptOssModel(
(embed_tokens): Embedding(201088, 2880, padding_idx=199999)
(layers...
(rotary_emb): GptOssRotaryEmbedding()
)
(lm_head): Linear(in_features=2880, out_features=201088, bias=False)
)
save_directory = '/tmp/tmptut51kyp', is_main_process = True, state_dict = None, save_function = <function save at 0x7fef7bd020e0>, push_to_hub = False
max_shard_size = '5GB', safe_serialization = True, variant = None, token = None, save_peft_format = True, kwargs = {}, use_auth_token = None
ignore_metadata_errors = False, _hf_peft_config_loaded = False, hf_quantizer = <transformers.quantizers.quantizer_mxfp4.Mxfp4HfQuantizer object at 0x7fee27aafbe0>
quantization_serializable = True
def save_pretrained(
self,
save_directory: Union[str, os.PathLike],
is_main_process: bool = True,
state_dict: Optional[dict] = None,
save_function: Callable = torch.save,
push_to_hub: bool = False,
max_shard_size: Union[int, str] = "5GB",
safe_serialization: bool = True,
variant: Optional[str] = None,
token: Optional[Union[str, bool]] = None,
save_peft_format: bool = True,
**kwargs,
):
"""
Save a model and its configuration file to a directory, so that it can be re-loaded using the
[`~PreTrainedModel.from_pretrained`] class method.
Arguments:
save_directory (`str` or `os.PathLike`):
Directory to which to save. Will be created if it doesn't exist.
is_main_process (`bool`, *optional*, defaults to `True`):
Whether the process calling this is the main process or not. Useful when in distributed training like
TPUs and need to call this function on all processes. In this case, set `is_main_process=True` only on
the main process to avoid race conditions.
state_dict (nested dictionary of `torch.Tensor`):
The state dictionary of the model to save. Will default to `self.state_dict()`, but can be used to only
save parts of the model or if special precautions need to be taken when recovering the state dictionary
of a model (like when using model parallelism).
save_function (`Callable`):
The function to use to save the state dictionary. Useful on distributed training like TPUs when one
need to replace `torch.save` by another method.
push_to_hub (`bool`, *optional*, defaults to `False`):
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
namespace).
max_shard_size (`int` or `str`, *optional*, defaults to `"5GB"`):
The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size
lower than this size. If expressed as a string, needs to be digits followed by a unit (like `"5MB"`).
We default it to 5GB in order for models to be able to run easily on free-tier google colab instances
without CPU OOM issues.
<Tip warning={true}>
If a single weight of the model is bigger than `max_shard_size`, it will be in its own checkpoint shard
which will be bigger than `max_shard_size`.
</Tip>
safe_serialization (`bool`, *optional*, defaults to `True`):
Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
variant (`str`, *optional*):
If specified, weights are saved in the format pytorch_model.<variant>.bin.
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
the token generated when running `hf auth login` (stored in `~/.huggingface`).
save_peft_format (`bool`, *optional*, defaults to `True`):
For backward compatibility with PEFT library, in case adapter weights are attached to the model, all
keys of the state dict of adapters needs to be prepended with `base_model.model`. Advanced users can
disable this behaviours by setting `save_peft_format` to `False`.
kwargs (`dict[str, Any]`, *optional*):
Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
"""
use_auth_token = kwargs.pop("use_auth_token", None)
ignore_metadata_errors = kwargs.pop("ignore_metadata_errors", False)
if use_auth_token is not None:
warnings.warn(
"The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.",
FutureWarning,
)
if token is not None:
raise ValueError(
"`token` and `use_auth_token` are both specified. Please set only the argument `token`."
)
token = use_auth_token
if token is not None:
kwargs["token"] = token
_hf_peft_config_loaded = getattr(self, "_hf_peft_config_loaded", False)
hf_quantizer = getattr(self, "hf_quantizer", None)
quantization_serializable = (
hf_quantizer is not None
and isinstance(hf_quantizer, HfQuantizer)
and hf_quantizer.is_serializable(safe_serialization=safe_serialization)
)
if hf_quantizer is not None and not _hf_peft_config_loaded and not quantization_serializable:
raise ValueError(
f"The model is quantized with {hf_quantizer.quantization_config.quant_method} and is not serializable - check out the warnings from"
" the logger on the traceback to understand the reason why the quantized model is not serializable."
)
if "save_config" in kwargs:
warnings.warn(
"`save_config` is deprecated and will be removed in v5 of Transformers. Use `is_main_process` instead."
)
is_main_process = kwargs.pop("save_config")
if safe_serialization and not is_safetensors_available():
raise ImportError("`safe_serialization` requires the `safetensors library: `pip install safetensors`.")
# we need to check against tp_size, not tp_plan, as tp_plan is substituted to the class one
if self._tp_size is not None and not is_huggingface_hub_greater_or_equal("0.31.4"):
raise ImportError(
"Saving a model with tensor parallelism requires `huggingface_hub` version 0.31.4 or higher."
)
if os.path.isfile(save_directory):
logger.error(f"Provided path ({save_directory}) should be a directory, not a file")
return
os.makedirs(save_directory, exist_ok=True)
if push_to_hub:
commit_message = kwargs.pop("commit_message", None)
repo_id = kwargs.pop("repo_id", save_directory.split(os.path.sep)[-1])
create_pr = kwargs.pop("create_pr", False)
repo_id = self._create_repo(repo_id, **kwargs)
files_timestamps = self._get_files_timestamps(save_directory)
metadata = {}
if hf_quantizer is not None:
> state_dict, metadata = hf_quantizer.get_state_dict_and_metadata(self, safe_serialization)
E TypeError: Mxfp4HfQuantizer.get_state_dict_and_metadata() takes 2 positional arguments but 3 were given
../../../src/transformers/modeling_utils.py:3918: TypeError
-------------------------------------------------------------------------- Captured stderr call --------------------------------------------------------------------------
`torch_dtype` is deprecated! Use `dtype` instead!
Fetching 41 files: 100%|██████████| 41/41 [00:00<00:00, 11247.72it/s]
Fetching 41 files: 100%|██████████| 41/41 [00:00<00:00, 10037.15it/s]
Loading checkpoint shards: 100%|██████████| 3/3 [00:07<00:00, 2.38s/it]
======================================================================== short test summary info =========================================================================
FAILED test_mxfp4.py::Mxfp4ModelTest::test_save_mxfp4 - TypeError: Mxfp4HfQuantizer.get_state_dict_and_metadata() takes 2 positional arguments but 3 were given
```
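The failure mode above is a plain Python signature mismatch: `save_pretrained` calls the quantizer hook with two positional arguments (`model`, `safe_serialization`), but the MXFP4 override accepts only one. A minimal standalone sketch of the bug and the fix (class names mirror the traceback, the method bodies are purely illustrative):

```python
class BaseQuantizer:
    # The caller in save_pretrained passes (model, safe_serialization).
    def get_state_dict_and_metadata(self, model, safe_serialization=False):
        return {}, {}


class BrokenMxfp4Quantizer(BaseQuantizer):
    # Override drops the second parameter -> TypeError at the call site.
    def get_state_dict_and_metadata(self, model):
        return {}, {}


class FixedMxfp4Quantizer(BaseQuantizer):
    # Fix: accept the same signature as the base class.
    def get_state_dict_and_metadata(self, model, safe_serialization=False):
        return {}, {"format": "mxfp4"}


model = object()
try:
    BrokenMxfp4Quantizer().get_state_dict_and_metadata(model, True)
except TypeError as e:
    print("broken:", e)  # takes 2 positional arguments but 3 were given

state_dict, metadata = FixedMxfp4Quantizer().get_state_dict_and_metadata(model, True)
print("fixed:", metadata)
```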
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41118/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41117
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41117/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41117/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41117/events
|
https://github.com/huggingface/transformers/pull/41117
| 3,448,071,969
|
PR_kwDOCUB6oc6qN2rf
| 41,117
|
[XPU] Add MXFP4 support for XPU
|
{
"login": "YangKai0616",
"id": 103475281,
"node_id": "U_kgDOBiroUQ",
"avatar_url": "https://avatars.githubusercontent.com/u/103475281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YangKai0616",
"html_url": "https://github.com/YangKai0616",
"followers_url": "https://api.github.com/users/YangKai0616/followers",
"following_url": "https://api.github.com/users/YangKai0616/following{/other_user}",
"gists_url": "https://api.github.com/users/YangKai0616/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YangKai0616/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YangKai0616/subscriptions",
"organizations_url": "https://api.github.com/users/YangKai0616/orgs",
"repos_url": "https://api.github.com/users/YangKai0616/repos",
"events_url": "https://api.github.com/users/YangKai0616/events{/privacy}",
"received_events_url": "https://api.github.com/users/YangKai0616/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T07:20:23
| 2025-09-29T10:10:41
| 2025-09-29T10:10:41
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41117",
"html_url": "https://github.com/huggingface/transformers/pull/41117",
"diff_url": "https://github.com/huggingface/transformers/pull/41117.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41117.patch",
"merged_at": "2025-09-29T10:10:41"
}
|
# What does this PR do?
Add **MXFP4** quantization support for **XPU**, enabling **GPT-OSS** models in MXFP4 format to be loaded on XPU.
1. Add MXFP4 support for XPU
2. Improve test file `tests/quantization/mxfp4/test_mxfp4.py` to support XPU testing
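For context, MXFP4 is the OCP microscaling format: blocks of (typically 32) 4-bit E2M1 elements that share one power-of-two scale. A toy pure-Python round-trip sketch of that idea (an illustration of the format only, not the kernel code this PR touches):

```python
import math

# Representable magnitudes of the 4-bit E2M1 element type used by MXFP4.
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]


def mxfp4_quant_dequant(values, block_size=32):
    """Round-trip floats through a toy MXFP4 encoding: each block shares
    one power-of-two scale, each element snaps to the nearest E2M1 point."""
    out = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        amax = max(abs(v) for v in block)
        # Shared power-of-two scale so amax lands within the grid (max 6.0).
        scale = 2.0 ** math.ceil(math.log2(amax / 6.0)) if amax > 0 else 1.0
        for v in block:
            mag = min(FP4_GRID, key=lambda g: abs(abs(v) / scale - g))
            out.append(math.copysign(mag * scale, v))
    return out


print(mxfp4_quant_dequant([0.1, -0.75, 1.0, 5.5]))  # [0.0, -0.5, 1.0, 6.0]
```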
|
{
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41117/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41116
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41116/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41116/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41116/events
|
https://github.com/huggingface/transformers/pull/41116
| 3,448,036,305
|
PR_kwDOCUB6oc6qNwH6
| 41,116
|
Add MiniCPM3
|
{
"login": "bzantium",
"id": 19511788,
"node_id": "MDQ6VXNlcjE5NTExNzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19511788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bzantium",
"html_url": "https://github.com/bzantium",
"followers_url": "https://api.github.com/users/bzantium/followers",
"following_url": "https://api.github.com/users/bzantium/following{/other_user}",
"gists_url": "https://api.github.com/users/bzantium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bzantium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bzantium/subscriptions",
"organizations_url": "https://api.github.com/users/bzantium/orgs",
"repos_url": "https://api.github.com/users/bzantium/repos",
"events_url": "https://api.github.com/users/bzantium/events{/privacy}",
"received_events_url": "https://api.github.com/users/bzantium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-24T07:11:18
| 2025-10-09T16:19:50
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41116",
"html_url": "https://github.com/huggingface/transformers/pull/41116",
"diff_url": "https://github.com/huggingface/transformers/pull/41116.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41116.patch",
"merged_at": null
}
|
# What does this PR do?
This PR introduces support for the **MiniCPM3** model architecture, developed by OpenBMB. MiniCPM3 is a powerful, lightweight language model built on Multi-head Latent Attention (MLA) with a dense architecture; it demonstrates impressive performance in both English and Chinese, comparable to much larger models.
The primary motivation is to add a standard implementation for MLA dense models to the `transformers` library, which is currently missing. This addition will allow the community to easily use, fine-tune, and build upon MiniCPM3 for a variety of tasks, including function calling, code interpretation, and complex chat interactions.
This PR includes:
- The `MiniCpm3Config` configuration class.
- The `MiniCpm3Model` and `MiniCpm3ForCausalLM` modeling files.
- Integration into the `AutoModel` classes.
- Comprehensive docstrings and documentation updates.
- Necessary unit tests to ensure correctness and functionality.
The model uses the existing `LlamaTokenizer`, so no new tokenizer files are added.
Fixes #41115
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
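As background, the MLA attention this architecture brings to the library caches a small shared latent instead of full per-head keys and values, and up-projects per head only at attention time. A toy numpy sketch of where the cache saving comes from (all dimensions here are hypothetical, not MiniCPM3's real config):

```python
import numpy as np

# Toy dimensions (hypothetical, not MiniCPM3's actual config values).
seq_len, n_heads, head_dim, kv_latent_dim = 8, 4, 16, 8

rng = np.random.default_rng(0)
hidden = rng.standard_normal((seq_len, n_heads * head_dim))

# MLA: down-project hidden states into a small shared KV latent...
W_down = rng.standard_normal((n_heads * head_dim, kv_latent_dim))
kv_latent = hidden @ W_down  # this is what gets cached

# ...and up-project per head only when attention is computed.
W_up_k = rng.standard_normal((kv_latent_dim, n_heads * head_dim))
k = (kv_latent @ W_up_k).reshape(seq_len, n_heads, head_dim)

plain_cache = seq_len * n_heads * head_dim  # standard cache entries (per K or V)
mla_cache = seq_len * kv_latent_dim         # latent cache entries
print(plain_cache, mla_cache)               # cache shrinks by n_heads * head_dim / kv_latent_dim
```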
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41116/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41115
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41115/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41115/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41115/events
|
https://github.com/huggingface/transformers/issues/41115
| 3,448,019,756
|
I_kwDOCUB6oc7NhJss
| 41,115
|
Add Model Architecture for MiniCPM3
|
{
"login": "bzantium",
"id": 19511788,
"node_id": "MDQ6VXNlcjE5NTExNzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19511788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bzantium",
"html_url": "https://github.com/bzantium",
"followers_url": "https://api.github.com/users/bzantium/followers",
"following_url": "https://api.github.com/users/bzantium/following{/other_user}",
"gists_url": "https://api.github.com/users/bzantium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bzantium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bzantium/subscriptions",
"organizations_url": "https://api.github.com/users/bzantium/orgs",
"repos_url": "https://api.github.com/users/bzantium/repos",
"events_url": "https://api.github.com/users/bzantium/events{/privacy}",
"received_events_url": "https://api.github.com/users/bzantium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-24T07:07:47
| 2025-09-24T07:07:47
| null |
CONTRIBUTOR
| null | null | null | null |
### Model description
This is a feature request to add a new model architecture for MiniCPM3, a powerful, lightweight language model.
MiniCPM3 is the third generation of the MiniCPM series. It demonstrates performance comparable to or exceeding many 7B-9B models, despite its smaller size. It excels in both English and Chinese, with advanced capabilities like function calling and code interpretation. The model is built on the Transformer architecture and introduces several innovative features to enable its impressive performance.
A standard architecture for MLA dense models like MiniCPM3 would be a valuable addition to the transformers library.
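For context, the MLA (multi-head latent attention) design this request refers to caches a small per-token latent instead of full keys and values. A toy numpy sketch of the low-rank KV path (all dimensions are invented for illustration, not MiniCPM3's actual config):

```python
import numpy as np

# Toy sketch of MLA's low-rank KV compression: cache a small latent per
# token and expand it to keys/values on the fly. Sizes are invented.
hidden, latent, kv_dim = 64, 16, 64
rng = np.random.default_rng(0)
W_down = rng.standard_normal((hidden, latent)) * 0.1   # compress
W_up_k = rng.standard_normal((latent, kv_dim)) * 0.1   # expand to keys
W_up_v = rng.standard_normal((latent, kv_dim)) * 0.1   # expand to values

x = rng.standard_normal((8, hidden))  # hidden states for 8 tokens
c = x @ W_down                        # only this (8, 16) latent is cached
k, v = c @ W_up_k, c @ W_up_v         # reconstructed (8, 64) keys/values
print(c.shape, k.shape, v.shape)
```

The KV cache shrinks from `kv_dim` to `latent` floats per token, which is where the memory savings come from.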
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
https://huggingface.co/openbmb/MiniCPM3-4B
https://github.com/OpenBMB/MiniCPM/
https://arxiv.org/abs/2404.06395
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41115/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41114
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41114/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41114/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41114/events
|
https://github.com/huggingface/transformers/pull/41114
| 3,447,176,585
|
PR_kwDOCUB6oc6qK6Gv
| 41,114
|
Add language specifiers to code blocks of markdown files
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T01:16:57
| 2025-09-26T00:27:34
| 2025-09-25T17:29:57
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41114",
"html_url": "https://github.com/huggingface/transformers/pull/41114",
"diff_url": "https://github.com/huggingface/transformers/pull/41114.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41114.patch",
"merged_at": "2025-09-25T17:29:57"
}
|
# What does this PR do?
As the title says: adding language specifiers to fenced code blocks enables syntax highlighting and better rendering of the content.
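As an aside, the lines this PR touches can be found mechanically; a minimal sketch that scans a toy string rather than the real docs tree:

```python
# Minimal sketch: flag fence openers that lack a language specifier.
# The sample text stands in for a markdown file's contents.
text = "```\nprint('hi')\n```\n\n```python\nprint('hi')\n```\n"
bare_openers = []
in_fence = False
for i, line in enumerate(text.splitlines(), 1):
    if line.startswith("```"):
        # A bare ``` that *opens* a block has no info string after it.
        if not in_fence and line.strip() == "```":
            bare_openers.append(i)
        in_fence = not in_fence
print(bare_openers)  # only the first fence opens without a specifier
```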
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41114/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41113
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41113/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41113/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41113/events
|
https://github.com/huggingface/transformers/pull/41113
| 3,447,113,533
|
PR_kwDOCUB6oc6qKsm0
| 41,113
|
Fix broken `` expressions in markdown files
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-24T00:48:38
| 2025-09-24T11:34:57
| 2025-09-24T11:34:12
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41113",
"html_url": "https://github.com/huggingface/transformers/pull/41113",
"diff_url": "https://github.com/huggingface/transformers/pull/41113.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41113.patch",
"merged_at": "2025-09-24T11:34:12"
}
|
# What does this PR do?
As the title says.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41113/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41112
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41112/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41112/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41112/events
|
https://github.com/huggingface/transformers/pull/41112
| 3,446,467,889
|
PR_kwDOCUB6oc6qImAm
| 41,112
|
Add FastVLM
|
{
"login": "kamila-chay",
"id": 201148875,
"node_id": "U_kgDOC_1Jyw",
"avatar_url": "https://avatars.githubusercontent.com/u/201148875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kamila-chay",
"html_url": "https://github.com/kamila-chay",
"followers_url": "https://api.github.com/users/kamila-chay/followers",
"following_url": "https://api.github.com/users/kamila-chay/following{/other_user}",
"gists_url": "https://api.github.com/users/kamila-chay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kamila-chay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamila-chay/subscriptions",
"organizations_url": "https://api.github.com/users/kamila-chay/orgs",
"repos_url": "https://api.github.com/users/kamila-chay/repos",
"events_url": "https://api.github.com/users/kamila-chay/events{/privacy}",
"received_events_url": "https://api.github.com/users/kamila-chay/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-23T19:45:16
| 2025-10-22T13:05:02
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41112",
"html_url": "https://github.com/huggingface/transformers/pull/41112",
"diff_url": "https://github.com/huggingface/transformers/pull/41112.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41112.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds FastVLM from Apple. The model's architecture is very similar to LLaVA; the main difference is that it uses a very fast hybrid vision encoder called FastViTHD. timm's FastViT implementation is reused, and the LLaVA modality connector has been slightly modified.
Addresses https://github.com/huggingface/transformers/issues/38765
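For readers unfamiliar with the connector being modified: a LLaVA-style modality connector projects vision features into the language model's hidden size with a small MLP. A hypothetical numpy sketch (names and dimensions are illustrative, not FastVLM's actual config):

```python
import numpy as np

# Hypothetical LLaVA-style modality connector: a two-layer MLP mapping
# vision encoder features to the text model's hidden size.
vision_dim, text_dim = 32, 48
rng = np.random.default_rng(0)
W1 = rng.standard_normal((vision_dim, text_dim)) * 0.02
W2 = rng.standard_normal((text_dim, text_dim)) * 0.02

def connect(image_feats):
    # image_feats: (num_patches, vision_dim) -> (num_patches, text_dim)
    h = np.maximum(image_feats @ W1, 0.0)  # real models use GELU; ReLU here
    return h @ W2

tokens = connect(rng.standard_normal((5, vision_dim)))
print(tokens.shape)
```

The projected `tokens` are then spliced into the language model's input sequence alongside the text embeddings.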
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@zucchini-nlp @ariG23498
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41112/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41112/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41111
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41111/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41111/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41111/events
|
https://github.com/huggingface/transformers/pull/41111
| 3,446,349,454
|
PR_kwDOCUB6oc6qIMqQ
| 41,111
|
Support loading LFM2 GGUF
|
{
"login": "HaroldBenoit",
"id": 60629420,
"node_id": "MDQ6VXNlcjYwNjI5NDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/60629420?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaroldBenoit",
"html_url": "https://github.com/HaroldBenoit",
"followers_url": "https://api.github.com/users/HaroldBenoit/followers",
"following_url": "https://api.github.com/users/HaroldBenoit/following{/other_user}",
"gists_url": "https://api.github.com/users/HaroldBenoit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HaroldBenoit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HaroldBenoit/subscriptions",
"organizations_url": "https://api.github.com/users/HaroldBenoit/orgs",
"repos_url": "https://api.github.com/users/HaroldBenoit/repos",
"events_url": "https://api.github.com/users/HaroldBenoit/events{/privacy}",
"received_events_url": "https://api.github.com/users/HaroldBenoit/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T18:58:43
| 2025-09-24T10:18:14
| 2025-09-24T10:17:41
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41111",
"html_url": "https://github.com/huggingface/transformers/pull/41111",
"diff_url": "https://github.com/huggingface/transformers/pull/41111.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41111.patch",
"merged_at": "2025-09-24T10:17:41"
}
|
# What does this PR do?
Currently, loading GGUF versions of LFM2 models raises a "GGUF model with architecture lfm2 is not supported yet" error. This PR resolves that.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@SunMarc @MekkCyber
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41111/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41110
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41110/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41110/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41110/events
|
https://github.com/huggingface/transformers/pull/41110
| 3,446,251,276
|
PR_kwDOCUB6oc6qH3tu
| 41,110
|
[docs] Fix links
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T18:20:39
| 2025-09-30T16:08:17
| 2025-09-30T06:53:07
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41110",
"html_url": "https://github.com/huggingface/transformers/pull/41110",
"diff_url": "https://github.com/huggingface/transformers/pull/41110.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41110.patch",
"merged_at": "2025-09-30T06:53:07"
}
|
Fixes broken links in the Serving docs
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41110/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41110/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41109
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41109/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41109/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41109/events
|
https://github.com/huggingface/transformers/pull/41109
| 3,446,187,823
|
PR_kwDOCUB6oc6qHqDK
| 41,109
|
Remove bad test skips
|
{
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T17:58:29
| 2025-09-23T18:39:31
| 2025-09-23T18:39:29
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41109",
"html_url": "https://github.com/huggingface/transformers/pull/41109",
"diff_url": "https://github.com/huggingface/transformers/pull/41109.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41109.patch",
"merged_at": "2025-09-23T18:39:28"
}
|
# What does this PR do?
Here we go again...
|
{
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41109/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41108
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41108/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41108/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41108/events
|
https://github.com/huggingface/transformers/issues/41108
| 3,445,967,543
|
I_kwDOCUB6oc7NZUq3
| 41,108
|
`predict_step` in Trainer should pass `num_items_in_batch`
|
{
"login": "pramodith",
"id": 16939722,
"node_id": "MDQ6VXNlcjE2OTM5NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/16939722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pramodith",
"html_url": "https://github.com/pramodith",
"followers_url": "https://api.github.com/users/pramodith/followers",
"following_url": "https://api.github.com/users/pramodith/following{/other_user}",
"gists_url": "https://api.github.com/users/pramodith/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pramodith/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pramodith/subscriptions",
"organizations_url": "https://api.github.com/users/pramodith/orgs",
"repos_url": "https://api.github.com/users/pramodith/repos",
"events_url": "https://api.github.com/users/pramodith/events{/privacy}",
"received_events_url": "https://api.github.com/users/pramodith/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 2155169140,
"node_id": "MDU6TGFiZWwyMTU1MTY5MTQw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/trainer",
"name": "trainer",
"color": "2ef289",
"default": false,
"description": ""
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T16:34:42
| 2025-09-30T09:45:18
| 2025-09-30T09:45:18
|
CONTRIBUTOR
| null | null | null | null |
### Feature request
`predict_step` in `Trainer.py` doesn't currently pass `num_items_in_batch` to the `compute_loss` function.
https://github.com/huggingface/transformers/blob/869735d37d0f929311ac6611728c482a4414ba8c/src/transformers/trainer.py#L4900
This is inconsistent with the `training_step` function, which does pass it: https://github.com/huggingface/transformers/blob/869735d37d0f929311ac6611728c482a4414ba8c/src/transformers/trainer.py#L4019
### Motivation
The Trainer's `training_step` function uses `get_batch_samples` to calculate `num_items_in_batch`, which is used to scale the loss.
However, since `predict_step` does not pass this value, a user with a custom loss function has to account for `num_items_in_batch` being `None` at eval time but not at train time, which is confusing. Having both the train and predict steps compute `num_items_in_batch` the same way ensures accurate logging and comparison of loss metrics.
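A minimal sketch of the asymmetry a custom loss currently has to absorb (the helper and its names are hypothetical, not Trainer internals):

```python
# Hypothetical custom loss helper that tolerates num_items_in_batch
# being None, which is what eval sees today since predict_step omits it.
def scale_loss(summed_loss, num_items_in_batch=None):
    if num_items_in_batch is None:
        # predict_step path: no token count available, skip normalization
        return summed_loss
    # training_step path: normalize by the true number of items
    return summed_loss / num_items_in_batch

print(scale_loss(10.0, 5), scale_loss(10.0))  # training vs. eval behavior
```

If `predict_step` passed the count, this branch would disappear and the same loss scaling would apply in both phases.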
### Your contribution
I'm happy to submit a PR for this unless there's a strong reason as to why the `predict_step` shouldn't be passing the `num_items_in_batch` to `compute_loss`
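Until the two paths are aligned, a custom loss function has to branch on the value. A minimal sketch of the workaround (hypothetical names, not the actual Trainer API):

```python
# Hypothetical custom loss that tolerates num_items_in_batch being None
# at eval time (predict_step does not pass it) but an int during training.
def compute_loss(total_token_loss, batch_token_count, num_items_in_batch=None):
    # Fall back to the per-batch token count when the Trainer did not
    # supply the gradient-accumulation-wide item count.
    denominator = num_items_in_batch if num_items_in_batch is not None else batch_token_count
    return total_token_loss / denominator

assert compute_loss(10.0, 5) == 2.0                          # eval path: None
assert compute_loss(10.0, 5, num_items_in_batch=20) == 0.5   # train path
```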
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41108/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41107
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41107/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41107/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41107/events
|
https://github.com/huggingface/transformers/pull/41107
| 3,445,611,865
|
PR_kwDOCUB6oc6qFuk_
| 41,107
|
🚨 [v5] Remove SinkCache
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T14:48:54
| 2025-10-01T13:53:13
| 2025-10-01T13:46:55
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41107",
"html_url": "https://github.com/huggingface/transformers/pull/41107",
"diff_url": "https://github.com/huggingface/transformers/pull/41107.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41107.patch",
"merged_at": "2025-10-01T13:46:55"
}
|
# What does this PR do?
`SinkCache` can be removed in v5.
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41107/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41106
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41106/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41106/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41106/events
|
https://github.com/huggingface/transformers/pull/41106
| 3,445,546,357
|
PR_kwDOCUB6oc6qFgns
| 41,106
|
Fix `_get_test_info` for inherited tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T14:31:49
| 2025-09-23T17:35:26
| 2025-09-23T17:35:24
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41106",
"html_url": "https://github.com/huggingface/transformers/pull/41106",
"diff_url": "https://github.com/huggingface/transformers/pull/41106.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41106.patch",
"merged_at": "2025-09-23T17:35:24"
}
|
# What does this PR do?
Fix `_get_test_info` for inherited tests. Currently, there are 2 issues:
- When a test class inherits from a base class, say `VaultGemmaModelTest` inherits from `CausalLMModelTest`, and a test is not overridden in the subclass (so it is only defined in `CausalLMModelTest`), the current way of identifying the test's stack frame object does not work: the frame points to the file of the base class (`CausalLMModelTest`), not to the subclass `VaultGemmaModelTest` (which is the one reported by `PYTEST_CURRENT_TEST`).
- [A failed job run](https://github.com/huggingface/transformers/actions/runs/17923228517/job/50963413195)
- error: `E UnboundLocalError: local variable 'tb' referenced before assignment`
  - This PR uses a more reliable approach.
- When a patched method is called within `with self.assertRaises(AssertionError)`, we should raise the error immediately (if it happens) instead of recording it and raising it at a later stage (inside `tearDown`); otherwise we get an error about the expected `AssertionError` being missing.
  - I couldn't find a clean and short way to determine whether we are inside such a context.
  - For now, I decided to raise only when running inside a GitHub Actions or CircleCI environment, so we don't break the workflow runs.
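A minimal sketch of the more reliable identification via `PYTEST_CURRENT_TEST` (illustrative only; the parsing and helper name are assumptions, not the actual implementation):

```python
import os

# Parse the test id from PYTEST_CURRENT_TEST instead of walking stack
# frames, so an inherited test resolves to the subclass that ran it.
def get_test_info():
    current = os.environ.get("PYTEST_CURRENT_TEST", "")
    # pytest sets it as "path/to/test_file.py::TestClass::test_name (call)"
    test_id = current.split(" ")[0]
    parts = test_id.split("::")
    if len(parts) == 3:
        return {"file": parts[0], "class": parts[1], "test": parts[2]}
    return None

os.environ["PYTEST_CURRENT_TEST"] = (
    "tests/models/vaultgemma/test_modeling_vaultgemma.py"
    "::VaultGemmaModelTest::test_forward (call)"
)
info = get_test_info()
assert info["class"] == "VaultGemmaModelTest"  # the subclass, not the base
```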
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41106/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41105
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41105/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41105/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41105/events
|
https://github.com/huggingface/transformers/pull/41105
| 3,445,527,272
|
PR_kwDOCUB6oc6qFch9
| 41,105
|
Fix is_torch_neuroncore_available
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-23T14:26:32
| 2025-09-25T15:27:42
| null |
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41105",
"html_url": "https://github.com/huggingface/transformers/pull/41105",
"diff_url": "https://github.com/huggingface/transformers/pull/41105.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41105.patch",
"merged_at": null
}
|
# What does this PR do?
The argument is ignored.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41105/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41104
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41104/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41104/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41104/events
|
https://github.com/huggingface/transformers/pull/41104
| 3,445,525,071
|
PR_kwDOCUB6oc6qFcD1
| 41,104
|
docs: Fix Tool Use links and remove dead RAG links
|
{
"login": "RyanMullins",
"id": 868555,
"node_id": "MDQ6VXNlcjg2ODU1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/868555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanMullins",
"html_url": "https://github.com/RyanMullins",
"followers_url": "https://api.github.com/users/RyanMullins/followers",
"following_url": "https://api.github.com/users/RyanMullins/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanMullins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanMullins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanMullins/subscriptions",
"organizations_url": "https://api.github.com/users/RyanMullins/orgs",
"repos_url": "https://api.github.com/users/RyanMullins/repos",
"events_url": "https://api.github.com/users/RyanMullins/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanMullins/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T14:25:55
| 2025-09-23T16:18:50
| 2025-09-23T16:18:50
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41104",
"html_url": "https://github.com/huggingface/transformers/pull/41104",
"diff_url": "https://github.com/huggingface/transformers/pull/41104.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41104.patch",
"merged_at": "2025-09-23T16:18:49"
}
|
# What does this PR do?
* Fixes links to Tool Use docs in `apply_chat_template` docstring
* Corrects grammar in chat_extras.md
* Removes dead links to a RAG section that used to be part of chat_extras.md
* Ran `make style` and committed associated fixes.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1 @stevhliu
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41104/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41103
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41103/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41103/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41103/events
|
https://github.com/huggingface/transformers/pull/41103
| 3,445,478,109
|
PR_kwDOCUB6oc6qFSGJ
| 41,103
|
Simplify and improve model loading logic
|
{
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T14:13:48
| 2025-09-29T12:33:50
| 2025-09-25T15:28:27
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41103",
"html_url": "https://github.com/huggingface/transformers/pull/41103",
"diff_url": "https://github.com/huggingface/transformers/pull/41103.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41103.patch",
"merged_at": "2025-09-25T15:28:27"
}
|
# What does this PR do?
Quite a bit of improvements in this PR to keep making things simpler:
- completely remove `offload_state_dict`: it was added several years ago to avoid holding 2x the memory on CPU between the model and the state dict -> this has not been needed for some time, as the model is loaded on the meta device and params are then loaded one after the other -> removes quite a bit of convoluted logic
- remove the distinction between `model_to_load` and `model` in `_load_pretrained_model` -> this was there from the beginning and is only confusing -> removes quite a bit of logic here as well
- rename `check_quantized_param() -> bool` to `param_needs_quantization() -> bool` in the quantizers -> much more self-explanatory; the old name was quite hard to understand
- remove `logger.info` messages -> if everything went normally, no need to tell me... only tell me what went wrong
- rework the `mismatched_shapes` tests -> they were redundant and quite badly written, with a ton of unneeded exceptions and skips
- several other improvements in logic/readability, as well as performance during init
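The meta-device pattern the first bullet relies on can be sketched roughly as follows (a minimal illustration, not the actual transformers loading code):

```python
import torch
from torch import nn

# The model skeleton lives on the meta device (no real memory), and real
# tensors replace parameters one at a time, so the full model and the full
# state dict never coexist on CPU -- which is why offload_state_dict is
# no longer needed.
with torch.device("meta"):
    model = nn.Linear(4, 4)

state_dict = {"weight": torch.randn(4, 4), "bias": torch.randn(4)}
for name, tensor in state_dict.items():
    # Materialize one parameter at a time from the state dict.
    setattr(model, name, nn.Parameter(tensor))

out = model(torch.randn(2, 4))
assert out.shape == (2, 4)
```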
|
{
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41103/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41102
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41102/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41102/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41102/events
|
https://github.com/huggingface/transformers/pull/41102
| 3,445,365,663
|
PR_kwDOCUB6oc6qE6NP
| 41,102
|
More typing fixes
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T13:43:05
| 2025-09-29T13:21:47
| 2025-09-29T13:11:53
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41102",
"html_url": "https://github.com/huggingface/transformers/pull/41102",
"diff_url": "https://github.com/huggingface/transformers/pull/41102.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41102.patch",
"merged_at": "2025-09-29T13:11:53"
}
|
# What does this PR do?
This PR adds types to parameters and removes unnecessary `noqa` comments.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41102/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41100
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41100/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41100/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41100/events
|
https://github.com/huggingface/transformers/pull/41100
| 3,445,078,608
|
PR_kwDOCUB6oc6qD8lr
| 41,100
|
Format empty lines and white space in markdown files.
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T12:23:12
| 2025-09-24T00:13:49
| 2025-09-23T23:20:01
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41100",
"html_url": "https://github.com/huggingface/transformers/pull/41100",
"diff_url": "https://github.com/huggingface/transformers/pull/41100.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41100.patch",
"merged_at": "2025-09-23T23:20:01"
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
They are formatted by `markdownlint-cli2`. These are simple changes that pave the way for detecting more issues in the documentation.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41100/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41099
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41099/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41099/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41099/events
|
https://github.com/huggingface/transformers/pull/41099
| 3,445,036,145
|
PR_kwDOCUB6oc6qDzmz
| 41,099
|
Fix the error where a keyword argument appearing before *args
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T12:12:01
| 2025-09-24T11:30:44
| 2025-09-24T11:27:37
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41099",
"html_url": "https://github.com/huggingface/transformers/pull/41099",
"diff_url": "https://github.com/huggingface/transformers/pull/41099.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41099.patch",
"merged_at": "2025-09-24T11:27:37"
}
|
# What does this PR do?
While passing a keyword argument before `*args` at a call site is valid Python, it is confusing and may cause subtle bugs.
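A minimal illustration of the pitfall (hypothetical example, not taken from the PR diff): when `f(key=value, *args)` is called, the unpacked positional arguments bind first, so they can collide with the keyword.

```python
# `f(a=1, *extra)` is syntactically valid, but the unpacked positionals
# bind before the keyword argument does.
def f(a, b):
    return (a, b)

extra = (2,)
try:
    f(a=1, *extra)  # 2 binds positionally to `a`, then `a=1` collides
except TypeError as err:
    print(err)      # f() got multiple values for argument 'a'

# Writing the unpacking first makes the binding order explicit:
print(f(*extra, b=1))  # (2, 1)
```

This is why moving the unpacking before any keywords is the safer spelling, even though both forms parse.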
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41099/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41098
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41098/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41098/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41098/events
|
https://github.com/huggingface/transformers/pull/41098
| 3,444,917,642
|
PR_kwDOCUB6oc6qDaFk
| 41,098
|
Add RMSNorm kernels for npu
|
{
"login": "zheliuyu",
"id": 190869220,
"node_id": "U_kgDOC2Bu5A",
"avatar_url": "https://avatars.githubusercontent.com/u/190869220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zheliuyu",
"html_url": "https://github.com/zheliuyu",
"followers_url": "https://api.github.com/users/zheliuyu/followers",
"following_url": "https://api.github.com/users/zheliuyu/following{/other_user}",
"gists_url": "https://api.github.com/users/zheliuyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zheliuyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zheliuyu/subscriptions",
"organizations_url": "https://api.github.com/users/zheliuyu/orgs",
"repos_url": "https://api.github.com/users/zheliuyu/repos",
"events_url": "https://api.github.com/users/zheliuyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/zheliuyu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T11:37:52
| 2025-10-13T11:24:12
| 2025-10-13T11:24:12
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41098",
"html_url": "https://github.com/huggingface/transformers/pull/41098",
"diff_url": "https://github.com/huggingface/transformers/pull/41098.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41098.patch",
"merged_at": null
}
|
# What does this PR do?
Add support for `kernels-ext-npu/RMSNorm` acceleration on NPU.
## Basework
- Discussed: https://github.com/huggingface/transformers/issues/39105
- Supported kernels:
- https://github.com/huggingface/kernels/pull/146
- https://github.com/huggingface/kernels/pull/155
## Test
### Prepare the env
```
git clone https://github.com/huggingface/kernels
pip install -e kernels
git clone https://github.com/zheliuyu/transformers-add-kernels
pip install -e transformers-add-kernels
```
### Test script
- Set `logging.basicConfig(level=logging.DEBUG)` to observe whether `"kernels-ext-npu/RMSNorm"` takes effect.
- Set `use_kernels=True`
- Set `use_kernel_forward_from_hub("RMSNorm")` on [modeling_qwen3.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen3/modeling_qwen3.py#L49)
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import logging
logging.basicConfig(level=logging.DEBUG)
model_name = "Qwen/Qwen3-0.6B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
use_kernels=True,
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
### Output
```
INFO:root:Using layer `RMSNorm` from repo `kernels-ext-npu/RMSNorm` (revision: main), layer `RMSNorm`
DEBUG:root:kernelize mode: Mode.INFERENCE, repo mode: Mode.FALLBACK
=========repeat x N=============
INFO:root:Using layer `RMSNorm` from repo `kernels-ext-npu/RMSNorm` (revision: main), layer `RMSNorm`
DEBUG:root:kernelize mode: Mode.INFERENCE, repo mode: Mode.FALLBACK
thinking content: <think>
Okay, the user wants a short introduction to a large language model. Let me start by recalling what I know. Large language models are AI systems that can understand and generate human language. They're used in various fields like translation, writing, and customer service. I should mention their capabilities and applications.
Wait, should I include specific examples? Maybe mention something like translating from Spanish to English or helping with customer service. Also, emphasize their adaptability to different languages and topics. Oh, and maybe touch on their training data sources to add depth. Let me check if I'm covering all key points without being too technical. Keep it concise but informative. Alright, that should do it.
</think>
content: A large language model (LLM) is an advanced AI system capable of understanding and generating human language, enabling tasks such as translation, text generation, and customer service. These models are trained on vast datasets to learn patterns and nuances in language, allowing them to interact with users in natural and meaningful ways. They are used across industries for tasks ranging from content creation to complex problem-solving.
```
### Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you write any new necessary tests?
@ArthurZucker Ready for review.
|
{
"login": "zheliuyu",
"id": 190869220,
"node_id": "U_kgDOC2Bu5A",
"avatar_url": "https://avatars.githubusercontent.com/u/190869220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zheliuyu",
"html_url": "https://github.com/zheliuyu",
"followers_url": "https://api.github.com/users/zheliuyu/followers",
"following_url": "https://api.github.com/users/zheliuyu/following{/other_user}",
"gists_url": "https://api.github.com/users/zheliuyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zheliuyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zheliuyu/subscriptions",
"organizations_url": "https://api.github.com/users/zheliuyu/orgs",
"repos_url": "https://api.github.com/users/zheliuyu/repos",
"events_url": "https://api.github.com/users/zheliuyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/zheliuyu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41098/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41097
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41097/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41097/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41097/events
|
https://github.com/huggingface/transformers/pull/41097
| 3,444,758,365
|
PR_kwDOCUB6oc6qC3Ti
| 41,097
|
Delay and probably avoid unnecessary graph breaks in _upad_input of modeling_flash_attention_utils.py
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-23T10:51:04
| 2025-09-29T12:47:26
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41097",
"html_url": "https://github.com/huggingface/transformers/pull/41097",
"diff_url": "https://github.com/huggingface/transformers/pull/41097.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41097.patch",
"merged_at": null
}
|
# What does this PR do?
It delays, and in many cases avoids, unnecessary graph breaks by refactoring `_get_unpad_data`.
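The idea can be sketched in plain Python (a hypothetical analogue, not the PR's code; the real `_get_unpad_data` operates on torch tensors, where an eager `.item()` on the max sequence length is a typical graph-break source): return the data-dependent scalar lazily, so callers that never need it on the host side never pay for it.

```python
# Hypothetical sketch: compute indices and cumulative lengths eagerly,
# but defer the host-side max (the graph-break analogue) behind a callable.
from itertools import accumulate

def get_unpad_data(attention_mask):
    # attention_mask: list of 0/1 rows, one row per batch element
    seqlens = [sum(row) for row in attention_mask]
    flat = [v for row in attention_mask for v in row]
    indices = [i for i, v in enumerate(flat) if v == 1]
    cu_seqlens = [0] + list(accumulate(seqlens))
    # Deferred: only evaluated if the caller actually asks for it.
    max_seqlen = lambda: max(seqlens)
    return indices, cu_seqlens, max_seqlen

indices, cu_seqlens, max_fn = get_unpad_data([[1, 1, 0], [1, 1, 1]])
print(indices)     # [0, 1, 3, 4, 5]
print(cu_seqlens)  # [0, 2, 5]
print(max_fn())    # 3
```

The assumed payoff is that under `torch.compile`, code paths that only need `indices` and `cu_seqlens` no longer force a device-to-host sync.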
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41097/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41096
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41096/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41096/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41096/events
|
https://github.com/huggingface/transformers/pull/41096
| 3,444,461,752
|
PR_kwDOCUB6oc6qB2BG
| 41,096
|
Remove softmax_backward_data
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T09:34:18
| 2025-09-23T11:41:07
| 2025-09-23T11:39:42
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41096",
"html_url": "https://github.com/huggingface/transformers/pull/41096",
"diff_url": "https://github.com/huggingface/transformers/pull/41096.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41096.patch",
"merged_at": null
}
|
# What does this PR do?
Remove softmax_backward_data and its references.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41096/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41095
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41095/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41095/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41095/events
|
https://github.com/huggingface/transformers/pull/41095
| 3,444,427,825
|
PR_kwDOCUB6oc6qBuv8
| 41,095
|
Add LLaVA-OneVision-1.5 model and related configurations
|
{
"login": "g1050",
"id": 46085963,
"node_id": "MDQ6VXNlcjQ2MDg1OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/46085963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g1050",
"html_url": "https://github.com/g1050",
"followers_url": "https://api.github.com/users/g1050/followers",
"following_url": "https://api.github.com/users/g1050/following{/other_user}",
"gists_url": "https://api.github.com/users/g1050/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g1050/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g1050/subscriptions",
"organizations_url": "https://api.github.com/users/g1050/orgs",
"repos_url": "https://api.github.com/users/g1050/repos",
"events_url": "https://api.github.com/users/g1050/events{/privacy}",
"received_events_url": "https://api.github.com/users/g1050/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-23T09:23:53
| 2025-10-28T08:44:26
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41095",
"html_url": "https://github.com/huggingface/transformers/pull/41095",
"diff_url": "https://github.com/huggingface/transformers/pull/41095.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41095.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds **LLaVA-OneVision-1.5**
Fixes #41081
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41095/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41094
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41094/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41094/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41094/events
|
https://github.com/huggingface/transformers/pull/41094
| 3,444,389,826
|
PR_kwDOCUB6oc6qBmqA
| 41,094
|
Update team member list for some CI workflows
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T09:13:52
| 2025-09-23T09:48:41
| 2025-09-23T09:48:41
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41094",
"html_url": "https://github.com/huggingface/transformers/pull/41094",
"diff_url": "https://github.com/huggingface/transformers/pull/41094.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41094.patch",
"merged_at": "2025-09-23T09:48:41"
}
|
# What does this PR do?
Update team member list for some CI workflows
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41094/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41094/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41093
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41093/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41093/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41093/events
|
https://github.com/huggingface/transformers/issues/41093
| 3,444,381,708
|
I_kwDOCUB6oc7NTRgM
| 41,093
|
IndexError: The shape of the mask [1406] at index 0 does not match the shape of the indexed tensor [1405] at index 0
|
{
"login": "wyn1015",
"id": 201194623,
"node_id": "U_kgDOC_38fw",
"avatar_url": "https://avatars.githubusercontent.com/u/201194623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wyn1015",
"html_url": "https://github.com/wyn1015",
"followers_url": "https://api.github.com/users/wyn1015/followers",
"following_url": "https://api.github.com/users/wyn1015/following{/other_user}",
"gists_url": "https://api.github.com/users/wyn1015/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wyn1015/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wyn1015/subscriptions",
"organizations_url": "https://api.github.com/users/wyn1015/orgs",
"repos_url": "https://api.github.com/users/wyn1015/repos",
"events_url": "https://api.github.com/users/wyn1015/events{/privacy}",
"received_events_url": "https://api.github.com/users/wyn1015/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 5769473378,
"node_id": "LA_kwDOCUB6oc8AAAABV-MtYg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Vision",
"name": "Vision",
"color": "C079EF",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T09:11:35
| 2025-10-22T07:26:02
| 2025-10-06T08:56:31
|
NONE
| null | null | null | null |
### System Info
In `transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py`, in `get_rope_index`: [rank3]: input_ids = input_ids[attention_mask[i] == 1] IndexError: The shape of the mask [1406] at index 0 does not match the shape of the indexed tensor [1405] at index 0
transformers==4.49.0, transformers==4.51.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Parameter Offload: Total persistent parameters: 848896 in 368 params
--- DEBUGGING prompt_inputs ---
Key: input_ids, Shape: torch.Size([1, 1411])
Key: attention_mask, Shape: torch.Size([1, 1411])
Key: pixel_values, Shape: torch.Size([5476, 1176])
Key: image_grid_thw, Shape: torch.Size([1, 3])
0%| | 0/4 [00:00<?, ?it/s]--- DEBUGGING prompt_inputs ---
Key: input_ids, Shape: torch.Size([1, 1402])
Key: attention_mask, Shape: torch.Size([1, 1402])
Key: pixel_values, Shape: torch.Size([5476, 1176])
Key: image_grid_thw, Shape: torch.Size([1, 3])
`generation_config` default values have been modified to match model-specific defaults: {'use_cache': False, 'temperature': 1e-06, 'repetition_penalty': 1.05, 'bos_token_id': 151643, 'eos_token_id': [151645, 151643]}. If this is not desired, please set these values explicitly.
`generation_config` default values have been modified to match model-specific defaults: {'use_cache': False, 'temperature': 1e-06, 'repetition_penalty': 1.05, 'bos_token_id': 151643, 'eos_token_id': [151645, 151643]}. If this is not desired, please set these values explicitly.
/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/torch/utils/checkpoint.py:87: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn(
/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/torch/utils/checkpoint.py:87: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn(
[rank0]: Traceback (most recent call last):
[rank0]: File "/ unified/UnifiedReward-main/UnifiedReward-Think/src/open_r1/grpo.py", line 337, in <module>
[rank0]: main(script_args, training_args, model_args)
[rank0]: File "/ unified/UnifiedReward-main/UnifiedReward-Think/src/open_r1/grpo.py", line 326, in main
[rank0]: trainer.train()
[rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/trainer.py", line 2237, in train
[rank0]: return inner_training_loop(
[rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/trainer.py", line 2578, in _inner_training_loop
[rank0]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/trainer.py", line 3792, in training_step
[rank0]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank0]: File "/ unified/UnifiedReward-main/UnifiedReward-Think/src/open_r1/trainer/grpo_trainer.py", line 495, in compute_loss
[rank0]: prompt_completion_ids = unwrapped_model.generate(**prompt_inputs, generation_config=self.generation_config)
[rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/generation/utils.py", line 2633, in generate
[rank0]: result = self._sample(
[rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/generation/utils.py", line 3607, in _sample
[rank0]: model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
[rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1561, in prepare_inputs_for_generation
[rank0]: vision_positions, rope_deltas = self.model.get_rope_index(
[rank0]: File "/ conda-envs/searchlm_cu121/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1057, in get_rope_index
[rank0]: input_ids = input_ids[attention_mask[i] == 1]
[rank0]: IndexError: The shape of the mask [1406] at index 0 does not match the shape of the indexed tensor [1405] at index 0
0%| | 0/4 [00:06<?, ?it/s]
[2025-09-23 05:40:36,349] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 1295796
[2025-09-23 05:40:36,350] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 1295797
[2025-09-23 05:40:36,565] [ERROR] [launch.py:325:sigkill_handler] ['/ conda-envs/searchlm_cu121/bin/python3.10', '-u', 'src/open_r1/grpo.py', '--local_rank=1', '--deepspeed', 'scripts/zero3.json', '--ddp_timeout', '180000000', '--output_dir', './checkpoints/UnifiedReward-Think-qwen-GRPO', '--model_name_or_path', '/ model/UnifiedReward-qwen-7b', '--dataset_name', '/ unified/UnifiedReward-main/UnifiedReward-Think/dataset/HPD/HPD_train_data_qwen1.json', '--max_prompt_length', '2048', '--max_completion_length', '1024', '--num_generations', '2', '--per_device_train_batch_size', '1', '--gradient_accumulation_steps', '1', '--learning_rate', '1e-6', '--logging_steps', '1', '--bf16', 'True', '--torch_dtype', 'bfloat16', '--report_to', 'none', '--gradient_checkpointing', 'true', '--attn_implementation', 'eager', '--max_pixels', '147456', '--save_steps', '40', '--save_total_limit', '8', '--save_only_model', 'false', '--num_train_epochs', '2'] exits with return code = 1
### Expected behavior
It seems to be a transformers version issue. Could you help take a look?
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41093/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41092
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41092/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41092/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41092/events
|
https://github.com/huggingface/transformers/issues/41092
| 3,444,380,590
|
I_kwDOCUB6oc7NTROu
| 41,092
|
Failed to run `Qwen3-235B-A22B` with `tp_size=8`
|
{
"login": "Hermit-w",
"id": 129869143,
"node_id": "U_kgDOB72lVw",
"avatar_url": "https://avatars.githubusercontent.com/u/129869143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hermit-w",
"html_url": "https://github.com/Hermit-w",
"followers_url": "https://api.github.com/users/Hermit-w/followers",
"following_url": "https://api.github.com/users/Hermit-w/following{/other_user}",
"gists_url": "https://api.github.com/users/Hermit-w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hermit-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hermit-w/subscriptions",
"organizations_url": "https://api.github.com/users/Hermit-w/orgs",
"repos_url": "https://api.github.com/users/Hermit-w/repos",
"events_url": "https://api.github.com/users/Hermit-w/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hermit-w/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T09:11:15
| 2025-10-01T12:53:31
| 2025-09-23T15:44:43
|
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.56.1
- Platform: Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.28
- Python version: 3.12.11
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: 0.17.5
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: yes
- Using GPU in script?: yes
- GPU type: NVIDIA H800
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use the following command to run the script.
```bash
torchrun --nproc-per-node=8 run_model.py
```
The script is:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
if __name__ == '__main__':
torch.distributed.init_process_group("nccl")
model_name = "Qwen/Qwen3-235B-A22B"
model_path = model_name
tp_size = 8
model = AutoModelForCausalLM.from_pretrained(model_path, tp_plan="auto", tp_size=tp_size, dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_path)
prompt = "Hello, please introduce yourself.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model(
**inputs,
use_cache=True,
)
torch.distributed.destroy_process_group()
```
Running it causes the following error, which appears to be a shape problem related to the `tp_plan`.
<img width="1524" height="149" alt="Image" src="https://github.com/user-attachments/assets/e7b46744-3780-4800-8f72-2288a3baf35c" />
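To illustrate the kind of shape issue tensor parallelism can produce, here is a minimal, hypothetical sketch (not transformers' actual sharding code): a weight matrix is split across `tp_size` ranks, and shapes only divide evenly when the sharded dimension is a multiple of `tp_size`.

```python
import torch

# Hypothetical sketch: tensor parallelism shards a weight across tp_size ranks.
tp_size = 8
weight = torch.randn(4096, 4096)
shards = torch.chunk(weight, tp_size, dim=0)  # one shard per rank
print(len(shards), shards[0].shape)  # 8 shards of (512, 4096)
```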
### Expected behavior
Running model without error.
|
{
"login": "Hermit-w",
"id": 129869143,
"node_id": "U_kgDOB72lVw",
"avatar_url": "https://avatars.githubusercontent.com/u/129869143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hermit-w",
"html_url": "https://github.com/Hermit-w",
"followers_url": "https://api.github.com/users/Hermit-w/followers",
"following_url": "https://api.github.com/users/Hermit-w/following{/other_user}",
"gists_url": "https://api.github.com/users/Hermit-w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hermit-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hermit-w/subscriptions",
"organizations_url": "https://api.github.com/users/Hermit-w/orgs",
"repos_url": "https://api.github.com/users/Hermit-w/repos",
"events_url": "https://api.github.com/users/Hermit-w/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hermit-w/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41092/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41091
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41091/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41091/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41091/events
|
https://github.com/huggingface/transformers/pull/41091
| 3,444,349,998
|
PR_kwDOCUB6oc6qBegK
| 41,091
|
fix wrong height and width when reading video using torchvision
|
{
"login": "Juude",
"id": 2675838,
"node_id": "MDQ6VXNlcjI2NzU4Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2675838?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Juude",
"html_url": "https://github.com/Juude",
"followers_url": "https://api.github.com/users/Juude/followers",
"following_url": "https://api.github.com/users/Juude/following{/other_user}",
"gists_url": "https://api.github.com/users/Juude/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Juude/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Juude/subscriptions",
"organizations_url": "https://api.github.com/users/Juude/orgs",
"repos_url": "https://api.github.com/users/Juude/repos",
"events_url": "https://api.github.com/users/Juude/events{/privacy}",
"received_events_url": "https://api.github.com/users/Juude/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T09:02:12
| 2025-09-23T12:36:26
| 2025-09-23T12:35:45
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41091",
"html_url": "https://github.com/huggingface/transformers/pull/41091",
"diff_url": "https://github.com/huggingface/transformers/pull/41091.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41091.patch",
"merged_at": "2025-09-23T12:35:45"
}
|
# What does this PR do?
Fixes issue #41090
Fix the wrong height and width when reading a video using torchvision.
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41091/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41091/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41090
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41090/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41090/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41090/events
|
https://github.com/huggingface/transformers/issues/41090
| 3,444,344,421
|
I_kwDOCUB6oc7NTIZl
| 41,090
|
wrong metadata of width and height when reading video using read_video_torchvision
|
{
"login": "Juude",
"id": 2675838,
"node_id": "MDQ6VXNlcjI2NzU4Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2675838?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Juude",
"html_url": "https://github.com/Juude",
"followers_url": "https://api.github.com/users/Juude/followers",
"following_url": "https://api.github.com/users/Juude/following{/other_user}",
"gists_url": "https://api.github.com/users/Juude/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Juude/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Juude/subscriptions",
"organizations_url": "https://api.github.com/users/Juude/orgs",
"repos_url": "https://api.github.com/users/Juude/repos",
"events_url": "https://api.github.com/users/Juude/events{/privacy}",
"received_events_url": "https://api.github.com/users/Juude/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 5769473378,
"node_id": "LA_kwDOCUB6oc8AAAABV-MtYg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Vision",
"name": "Vision",
"color": "C079EF",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T09:00:33
| 2025-09-24T13:18:15
| 2025-09-24T13:18:15
|
CONTRIBUTOR
| null | null | null | null |
### System Info
The bug still exists in the master branch.
### Reproduction
1. Run inference with a video VLM such as smolvlm-video-256m
2. If torchvision is used to decode the video, execution will reach the function `read_video_torchvision`
3. Attach a debugger and you will notice that the height is always 3
### Expected behavior
The `height` variable should be the video's height.
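As a hedged illustration (not the library's actual decoding code): torchvision's `read_video` returns frames shaped `(num_frames, height, width, channels)`, so reading the height from the wrong axis yields the channel count, 3, for every video.

```python
import torch

# Stand-in for decoded video frames: (num_frames, height, width, channels).
frames = torch.zeros(10, 480, 640, 3)

buggy_height = frames.shape[-1]        # 3 -> channels mistaken for height
num_frames, height, width, channels = frames.shape
print(buggy_height, height, width)     # 3 480 640
```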
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41090/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41089
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41089/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41089/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41089/events
|
https://github.com/huggingface/transformers/pull/41089
| 3,444,310,948
|
PR_kwDOCUB6oc6qBWkk
| 41,089
|
Fix EXAONE-4.0 dummy id
|
{
"login": "lkm2835",
"id": 30465912,
"node_id": "MDQ6VXNlcjMwNDY1OTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/30465912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lkm2835",
"html_url": "https://github.com/lkm2835",
"followers_url": "https://api.github.com/users/lkm2835/followers",
"following_url": "https://api.github.com/users/lkm2835/following{/other_user}",
"gists_url": "https://api.github.com/users/lkm2835/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lkm2835/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkm2835/subscriptions",
"organizations_url": "https://api.github.com/users/lkm2835/orgs",
"repos_url": "https://api.github.com/users/lkm2835/repos",
"events_url": "https://api.github.com/users/lkm2835/events{/privacy}",
"received_events_url": "https://api.github.com/users/lkm2835/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T08:50:22
| 2025-09-29T16:31:28
| 2025-09-29T16:30:55
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41089",
"html_url": "https://github.com/huggingface/transformers/pull/41089",
"diff_url": "https://github.com/huggingface/transformers/pull/41089.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41089.patch",
"merged_at": "2025-09-29T16:30:55"
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/pull/39129#issuecomment-3322822168
LGAI-EXAONE/EXAONE-4.0-Instruct -> [LGAI-EXAONE/EXAONE-4.0-32B](https://huggingface.co/LGAI-EXAONE/EXAONE-4.0-32B)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41089/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41088
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41088/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41088/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41088/events
|
https://github.com/huggingface/transformers/pull/41088
| 3,444,296,335
|
PR_kwDOCUB6oc6qBTf8
| 41,088
|
Remove infer_device
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T08:45:23
| 2025-10-09T14:43:07
| 2025-10-09T14:05:39
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41088",
"html_url": "https://github.com/huggingface/transformers/pull/41088",
"diff_url": "https://github.com/huggingface/transformers/pull/41088.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41088.patch",
"merged_at": "2025-10-09T14:05:39"
}
|
# What does this PR do?
Remove `infer_device`
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41088/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41087
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41087/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41087/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41087/events
|
https://github.com/huggingface/transformers/pull/41087
| 3,444,231,659
|
PR_kwDOCUB6oc6qBF37
| 41,087
|
Fix typos in documentation
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T08:26:05
| 2025-09-23T11:28:14
| 2025-09-23T11:27:04
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41087",
"html_url": "https://github.com/huggingface/transformers/pull/41087",
"diff_url": "https://github.com/huggingface/transformers/pull/41087.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41087.patch",
"merged_at": "2025-09-23T11:27:04"
}
|
# What does this PR do?
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41087/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41086
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41086/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41086/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41086/events
|
https://github.com/huggingface/transformers/pull/41086
| 3,444,011,777
|
PR_kwDOCUB6oc6qAXOj
| 41,086
|
Fix argument name in benchmarking script
|
{
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T07:14:00
| 2025-09-23T11:05:27
| 2025-09-23T11:05:27
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41086",
"html_url": "https://github.com/huggingface/transformers/pull/41086",
"diff_url": "https://github.com/huggingface/transformers/pull/41086.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41086.patch",
"merged_at": "2025-09-23T11:05:27"
}
|
# What does this PR do?
I made a mistake in https://github.com/huggingface/transformers/pull/41047 renaming an argument, this PR fixes that.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41086/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41085
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41085/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41085/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41085/events
|
https://github.com/huggingface/transformers/issues/41085
| 3,443,973,935
|
I_kwDOCUB6oc7NRt8v
| 41,085
|
a small bug in qwen3_moe
|
{
"login": "shiwanghua",
"id": 29914854,
"node_id": "MDQ6VXNlcjI5OTE0ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/29914854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shiwanghua",
"html_url": "https://github.com/shiwanghua",
"followers_url": "https://api.github.com/users/shiwanghua/followers",
"following_url": "https://api.github.com/users/shiwanghua/following{/other_user}",
"gists_url": "https://api.github.com/users/shiwanghua/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shiwanghua/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shiwanghua/subscriptions",
"organizations_url": "https://api.github.com/users/shiwanghua/orgs",
"repos_url": "https://api.github.com/users/shiwanghua/repos",
"events_url": "https://api.github.com/users/shiwanghua/events{/privacy}",
"received_events_url": "https://api.github.com/users/shiwanghua/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-23T07:01:06
| 2025-10-23T08:02:37
| null |
NONE
| null | null | null | null |
### System Info
https://github.com/huggingface/transformers/blob/cbb290ec23ccd9b5c1d1ff4d333477449891debb/src/transformers/models/qwen3_moe/modular_qwen3_moe.py#L106
The names of the first and second variables should be exchanged:
```
top_x, idx = torch.where(expert_mask[expert_idx].squeeze(0))
```
In the subsequent code, replace `top_x` with `idx` and vice versa.
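A minimal illustration (not the transformers code itself) of why the unpacking order matters: `torch.where` on a 2-D boolean mask returns row indices first, then column indices. For a mask of shape `(top_k, num_tokens)`, the first returned tensor therefore indexes the top-k slot and the second indexes the token:

```python
import torch

# torch.where on a 2-D boolean mask returns (row_indices, col_indices)
# in that order. If the mask has shape (top_k, num_tokens), the FIRST
# tensor indexes the top-k slot and the SECOND indexes the token, so
# the variable bound first holds slot indices, not token indices.
mask = torch.tensor([[True, False, True],
                     [False, True, False]])
rows, cols = torch.where(mask)
print(rows.tolist())  # [0, 0, 1] -> row (top-k slot) of each hit
print(cols.tolist())  # [0, 2, 1] -> column (token) of each hit
```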
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The two variable names bound to the result of `torch.where` have swapped meanings.
### Expected behavior
Swap the variable names so each matches what it actually indexes.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41085/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41084
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41084/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41084/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41084/events
|
https://github.com/huggingface/transformers/issues/41084
| 3,443,922,628
|
I_kwDOCUB6oc7NRhbE
| 41,084
|
Set Block Decoding
|
{
"login": "davidmrau",
"id": 20661461,
"node_id": "MDQ6VXNlcjIwNjYxNDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/20661461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidmrau",
"html_url": "https://github.com/davidmrau",
"followers_url": "https://api.github.com/users/davidmrau/followers",
"following_url": "https://api.github.com/users/davidmrau/following{/other_user}",
"gists_url": "https://api.github.com/users/davidmrau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidmrau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidmrau/subscriptions",
"organizations_url": "https://api.github.com/users/davidmrau/orgs",
"repos_url": "https://api.github.com/users/davidmrau/repos",
"events_url": "https://api.github.com/users/davidmrau/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidmrau/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-23T06:42:35
| 2025-10-10T06:03:28
| null |
NONE
| null | null | null | null |
### Feature request
Add Set Block Decoding for training and inference.
https://huggingface.co/papers/2509.04185
### Motivation
Speeding up generation time with minimal additional fine-tuning.
### Your contribution
Could implement a first draft.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41084/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41084/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41083
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41083/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41083/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41083/events
|
https://github.com/huggingface/transformers/pull/41083
| 3,443,900,235
|
PR_kwDOCUB6oc6p__Ww
| 41,083
|
Fix attention sink implementation in flex attention
|
{
"login": "SamuelBarryCS",
"id": 127697809,
"node_id": "U_kgDOB5yDkQ",
"avatar_url": "https://avatars.githubusercontent.com/u/127697809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelBarryCS",
"html_url": "https://github.com/SamuelBarryCS",
"followers_url": "https://api.github.com/users/SamuelBarryCS/followers",
"following_url": "https://api.github.com/users/SamuelBarryCS/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelBarryCS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamuelBarryCS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelBarryCS/subscriptions",
"organizations_url": "https://api.github.com/users/SamuelBarryCS/orgs",
"repos_url": "https://api.github.com/users/SamuelBarryCS/repos",
"events_url": "https://api.github.com/users/SamuelBarryCS/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamuelBarryCS/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 8093875025,
"node_id": "LA_kwDOCUB6oc8AAAAB4m67UQ",
"url": "https://api.github.com/repos/huggingface/transformers/labels/flex%20attention",
"name": "flex attention",
"color": "aaaaaa",
"default": false,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T06:33:16
| 2025-09-29T14:33:29
| 2025-09-29T14:33:04
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41083",
"html_url": "https://github.com/huggingface/transformers/pull/41083",
"diff_url": "https://github.com/huggingface/transformers/pull/41083.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41083.patch",
"merged_at": "2025-09-29T14:33:04"
}
|
## What
- Fixes #41026, where @jonny-so correctly identified that attention sinks were being incorrectly applied within the `score_mod` function of flex attention
- The previous implementation attempted to apply attention sinks by manipulating pre-softmax scores inside `score_mod`, which is incorrect because attention sinks require access to the full attention matrix after softmax normalization. This fix moves attention sink application after the flex attention computation and renormalizes the output correctly
- As a result, flex attention can now run with attention sinks, which was previously impossible because it crashed whenever `s_aux` was provided
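For intuition, here is a hedged numerical sketch (the names `s_aux` and `lse` follow the PR discussion; this is not the library implementation): because the sink only enlarges the softmax denominator and contributes no value, the sink-free output can be rescaled after the fact by `sigmoid(lse - s_aux)`, which matches a softmax that includes the sink logit explicitly:

```python
import torch

# Sketch of post-hoc sink renormalization for a single query / head.
# Assumed setup: `scores` are pre-softmax attention scores, `v` the
# values, `s_aux` the sink logit. The sink adds exp(s_aux) to the
# softmax denominator but attends to no value, so the sink-free output
# only needs rescaling by exp(lse) / (exp(lse) + exp(s_aux)).
torch.manual_seed(0)
scores = torch.randn(4)
v = torch.randn(4, 8)
s_aux = torch.tensor(0.5)

# Reference: softmax over the scores plus the sink logit; the sink's
# probability mass is simply dropped (it has no associated value).
probs = torch.softmax(torch.cat([scores, s_aux[None]]), dim=0)
ref = probs[:4] @ v

# Post-hoc renormalization of the sink-free attention output.
out = torch.softmax(scores, dim=0) @ v
lse = torch.logsumexp(scores, dim=0)
adj = out * torch.sigmoid(lse - s_aux)  # = out * e^lse / (e^lse + e^s_aux)

print(torch.allclose(ref, adj, atol=1e-6))  # True
```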
## Test performed
- No major logic change, so existing tests still pass
- Created a small script `test_attention_sink.py` to compare the old vs. new attention sink implementation in flex attention. `flex_attention_old.py` is simply a copy of `flex_attention.py` from main; both it and `test_attention_sink.py` are of course meant to be deleted before merging. The fix is confirmed by the script output:
```
Testing attention sinks implementation...
Input shapes: query torch.Size([1, 2, 4, 16]), key torch.Size([1, 2, 4, 16]), value torch.Size([1, 2, 4, 16])
s_aux shape: torch.Size([2])
--- Old Implementation ---
Status: FAILED - ['LoweringException: AssertionError: wrong ndim () [2]', ' target: flex_attention']
--- New Implementation ---
Before calling flex_attention_new...
torch.Size([1, 2, 4, 16]) torch.Size([1, 2, 4]) torch.Size([2])
After calling flex_attention_new...
Status: SUCCESS
Output: (tensor([[[[-0.4479, -0.0579, 0.0891, -0.1978, 0.4921, -0.0836, -0.4697,
0.2943, 0.4493, -0.2208, -0.1416, -0.0743, -0.2410, 0.4453,
-0.3093, 0.0605],
[...]
```
## How to review
- Check diff
- Run `test_attention_sink.py`
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41083/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41082
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41082/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41082/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41082/events
|
https://github.com/huggingface/transformers/pull/41082
| 3,443,529,378
|
PR_kwDOCUB6oc6p-xsZ
| 41,082
|
Fix
|
{
"login": "SamuelBarryCS",
"id": 127697809,
"node_id": "U_kgDOB5yDkQ",
"avatar_url": "https://avatars.githubusercontent.com/u/127697809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelBarryCS",
"html_url": "https://github.com/SamuelBarryCS",
"followers_url": "https://api.github.com/users/SamuelBarryCS/followers",
"following_url": "https://api.github.com/users/SamuelBarryCS/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelBarryCS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamuelBarryCS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelBarryCS/subscriptions",
"organizations_url": "https://api.github.com/users/SamuelBarryCS/orgs",
"repos_url": "https://api.github.com/users/SamuelBarryCS/repos",
"events_url": "https://api.github.com/users/SamuelBarryCS/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamuelBarryCS/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T03:36:44
| 2025-09-23T03:37:46
| 2025-09-23T03:37:46
|
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41082",
"html_url": "https://github.com/huggingface/transformers/pull/41082",
"diff_url": "https://github.com/huggingface/transformers/pull/41082.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41082.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "SamuelBarryCS",
"id": 127697809,
"node_id": "U_kgDOB5yDkQ",
"avatar_url": "https://avatars.githubusercontent.com/u/127697809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelBarryCS",
"html_url": "https://github.com/SamuelBarryCS",
"followers_url": "https://api.github.com/users/SamuelBarryCS/followers",
"following_url": "https://api.github.com/users/SamuelBarryCS/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelBarryCS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamuelBarryCS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelBarryCS/subscriptions",
"organizations_url": "https://api.github.com/users/SamuelBarryCS/orgs",
"repos_url": "https://api.github.com/users/SamuelBarryCS/repos",
"events_url": "https://api.github.com/users/SamuelBarryCS/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamuelBarryCS/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41082/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41081
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41081/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41081/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41081/events
|
https://github.com/huggingface/transformers/issues/41081
| 3,443,526,200
|
I_kwDOCUB6oc7NQAo4
| 41,081
|
Add support for LLaVA-OneVision-1.5 Multi-Modal Model
|
{
"login": "g1050",
"id": 46085963,
"node_id": "MDQ6VXNlcjQ2MDg1OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/46085963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g1050",
"html_url": "https://github.com/g1050",
"followers_url": "https://api.github.com/users/g1050/followers",
"following_url": "https://api.github.com/users/g1050/following{/other_user}",
"gists_url": "https://api.github.com/users/g1050/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g1050/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g1050/subscriptions",
"organizations_url": "https://api.github.com/users/g1050/orgs",
"repos_url": "https://api.github.com/users/g1050/repos",
"events_url": "https://api.github.com/users/g1050/events{/privacy}",
"received_events_url": "https://api.github.com/users/g1050/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 5769473378,
"node_id": "LA_kwDOCUB6oc8AAAABV-MtYg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Vision",
"name": "Vision",
"color": "C079EF",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-23T03:35:01
| 2025-09-24T01:57:46
| null |
NONE
| null | null | null | null |
### Model description
[**LLaVA-OneVision-1.5**](https://github.com/EvolvingLMMs-Lab/LLaVA-OneVision-1.5) introduces a novel family of **fully open-source** Large Multimodal Models (LMMs) that achieves **state-of-the-art performance** with substantially **lower cost** through training on **native resolution** images.
1. **Superior Performance**
A family of fully open-source large multimodal models demonstrating **superior performance** across multiple multimodal benchmarks, **outperforming Qwen2.5-VL** in most evaluation tasks.
2. **High-Quality Data at Scale**
Meticulously curated **mid-training and SFT data** with rigorous filtering and quality control.
- Concept-balanced, highly diverse, high-quality caption data
- Comprehensive instruction fine-tuning data covering a wide range of tasks
3. **Ultra-Efficient Training Framework**
Complete end-to-end training framework designed for maximum efficiency:
- **$16K total budget** for full model training
- **45% HFU efficiency** on A100 GPUs ($0.6 per GPU/Hour)
- Built on **MegatronLM** with support for **MoE**, **FP8**, and **long sequence parallelization**
- Optimized codebase for cost-effective scaling
4. **Fully Open Framework** for community access and reproducibility:
- ✅ High-quality mid-training & SFT data
- ✅ Complete training framework & code
- ✅ Training recipes & configurations
- ✅ Base & instruct model checkpoints
- ✅ Comprehensive training logs & metrics
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
[1] Arxiv: https://arxiv.org/abs/2507.20025
[2] Huggingface: https://huggingface.co/lmms-lab/LLaVA-OneVision-1.5-8B-Instruct
### Models on the Hub
1. https://huggingface.co/lmms-lab/LLaVA-OneVision-1.5-4B-stage0
2. https://huggingface.co/lmms-lab/LLaVA-OneVision-1.5-8B-stage0
3. https://huggingface.co/lmms-lab/LLaVA-OneVision-1.5-8B-Instruct
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41081/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41080
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41080/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41080/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41080/events
|
https://github.com/huggingface/transformers/issues/41080
| 3,443,461,853
|
I_kwDOCUB6oc7NPw7d
| 41,080
|
FileNotFoundError: Missing safe tensors file: D:\docling\docling_model\model.safetensors
|
{
"login": "rinno1027",
"id": 58933100,
"node_id": "MDQ6VXNlcjU4OTMzMTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/58933100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rinno1027",
"html_url": "https://github.com/rinno1027",
"followers_url": "https://api.github.com/users/rinno1027/followers",
"following_url": "https://api.github.com/users/rinno1027/following{/other_user}",
"gists_url": "https://api.github.com/users/rinno1027/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rinno1027/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rinno1027/subscriptions",
"organizations_url": "https://api.github.com/users/rinno1027/orgs",
"repos_url": "https://api.github.com/users/rinno1027/repos",
"events_url": "https://api.github.com/users/rinno1027/events{/privacy}",
"received_events_url": "https://api.github.com/users/rinno1027/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 5769473378,
"node_id": "LA_kwDOCUB6oc8AAAABV-MtYg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Vision",
"name": "Vision",
"color": "C079EF",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T03:06:20
| 2025-09-24T00:48:19
| 2025-09-24T00:48:19
|
NONE
| null | null | null | null |
### System Info
I got this error when running the following code:
```py
from docling.datamodel.pipeline_options import PdfPipelineOptions, RapidOcrOptions
from docling.document_converter import ConversionResult, DocumentConverter, InputFormat, PdfFormatOption
def main():
source = "D:/docling/docling_pdf.pdf"
rapidocr_path = "D:/docling/rapidocr_model/onnx/PP-OCRv4"
artifacts_path="D:/docling/docling_model"
det_model_path = rapidocr_path+"/det/ch_PP-OCRv4_det_infer.onnx"
rec_model_path = rapidocr_path+"/rec/ch_PP-OCRv4_rec_server_infer.onnx"
cls_model_path = rapidocr_path+"/cls/ch_ppocr_mobile_v2.0_cls_infer.onnx"
ocr_options = RapidOcrOptions(
det_model_path=det_model_path,
rec_model_path=rec_model_path,
cls_model_path=cls_model_path,
)
pipeline_options = PdfPipelineOptions(ocr_options=ocr_options,artifacts_path=artifacts_path)
converter = DocumentConverter(
format_options={InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)}
)
conversion_result: ConversionResult = converter.convert(source=source)
doc = conversion_result.document
md = doc.export_to_markdown()
print(md)
if __name__ == "__main__":
main()
```
I have already downloaded the docling-models and rapidocr models. My docling version is 2.54.0 and my Python version is 3.11.13. It seems that a file is missing.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
from docling.datamodel.pipeline_options import PdfPipelineOptions, RapidOcrOptions
from docling.document_converter import ConversionResult, DocumentConverter, InputFormat, PdfFormatOption
def main():
source = "D:/docling/docling_pdf.pdf"
rapidocr_path = "D:/docling/rapidocr_model/onnx/PP-OCRv4"
artifacts_path="D:/docling/docling_model"
det_model_path = rapidocr_path+"/det/ch_PP-OCRv4_det_infer.onnx"
rec_model_path = rapidocr_path+"/rec/ch_PP-OCRv4_rec_server_infer.onnx"
cls_model_path = rapidocr_path+"/cls/ch_ppocr_mobile_v2.0_cls_infer.onnx"
ocr_options = RapidOcrOptions(
det_model_path=det_model_path,
rec_model_path=rec_model_path,
cls_model_path=cls_model_path,
)
pipeline_options = PdfPipelineOptions(ocr_options=ocr_options,artifacts_path=artifacts_path)
converter = DocumentConverter(
format_options={InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)}
)
conversion_result: ConversionResult = converter.convert(source=source)
doc = conversion_result.document
md = doc.export_to_markdown()
print(md)
if __name__ == "__main__":
main()
```
### Expected behavior
The conversion should run without the missing-file error.
|
{
"login": "rinno1027",
"id": 58933100,
"node_id": "MDQ6VXNlcjU4OTMzMTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/58933100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rinno1027",
"html_url": "https://github.com/rinno1027",
"followers_url": "https://api.github.com/users/rinno1027/followers",
"following_url": "https://api.github.com/users/rinno1027/following{/other_user}",
"gists_url": "https://api.github.com/users/rinno1027/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rinno1027/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rinno1027/subscriptions",
"organizations_url": "https://api.github.com/users/rinno1027/orgs",
"repos_url": "https://api.github.com/users/rinno1027/repos",
"events_url": "https://api.github.com/users/rinno1027/events{/privacy}",
"received_events_url": "https://api.github.com/users/rinno1027/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41080/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41079
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41079/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41079/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41079/events
|
https://github.com/huggingface/transformers/issues/41079
| 3,443,240,291
|
I_kwDOCUB6oc7NO61j
| 41,079
|
Intern-S1 not working
|
{
"login": "Django-Jiang",
"id": 43953876,
"node_id": "MDQ6VXNlcjQzOTUzODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/43953876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Django-Jiang",
"html_url": "https://github.com/Django-Jiang",
"followers_url": "https://api.github.com/users/Django-Jiang/followers",
"following_url": "https://api.github.com/users/Django-Jiang/following{/other_user}",
"gists_url": "https://api.github.com/users/Django-Jiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Django-Jiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Django-Jiang/subscriptions",
"organizations_url": "https://api.github.com/users/Django-Jiang/orgs",
"repos_url": "https://api.github.com/users/Django-Jiang/repos",
"events_url": "https://api.github.com/users/Django-Jiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Django-Jiang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T00:44:01
| 2025-09-23T04:54:00
| 2025-09-23T04:54:00
|
NONE
| null | null | null | null |
### System Info
I am using vLLM 0.10.2 to host the Intern-S1 model. With the latest transformers 4.56.2, I get this error:
```
INFO 09-23 00:14:32 [__init__.py:216] Automatically detected platform cuda.
(APIServer pid=2398847) INFO 09-23 00:14:36 [api_server.py:1896] vLLM API server version 0.10.2
(APIServer pid=2398847) INFO 09-23 00:14:36 [utils.py:328] non-default args: {'model_tag': 'internlm/Intern-S1', 'enable_auto_tool_choice': True, 'tool_call_parser': 'internlm', 'model': 'internlm/Intern-S1', 'trust_remote_code': True, 'reasoning_parser': 'deepseek_r1', 'tensor_parallel_size': 8}
(APIServer pid=2398847) INFO 09-23 00:14:45 [__init__.py:742] Resolved architecture: InternS1ForConditionalGeneration
(APIServer pid=2398847) INFO 09-23 00:14:45 [__init__.py:1815] Using max model len 65536
(APIServer pid=2398847) INFO 09-23 00:14:45 [scheduler.py:222] Chunked prefill is enabled with max_num_batched_tokens=8192.
(APIServer pid=2398847) WARNING 09-23 00:14:46 [tokenizer.py:253] Using a slow tokenizer. This might cause a significant slowdown. Consider using a fast tokenizer instead.
(EngineCore_DP0 pid=2399427) INFO 09-23 00:14:54 [core.py:654] Waiting for init message from front-end.
(EngineCore_DP0 pid=2399427) INFO 09-23 00:14:54 [core.py:76] Initializing a V1 LLM engine (v0.10.2) with config: model='internlm/Intern-S1', speculative_config=None, tokenizer='internlm/Intern-S1', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=65536, download_dir=None, load_format=auto, tensor_parallel_size=8, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend='deepseek_r1'), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=internlm/Intern-S1, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2","vllm.mamba_mixer","vllm.short_conv","vllm.linear_attention","vllm.plamo2_mamba_mixer","vllm.gdn_attention"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":1,"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"pass_config":{},"max_capture_size":512,"local_cache_dir":null}
(EngineCore_DP0 pid=2399427) WARNING 09-23 00:14:54 [multiproc_worker_utils.py:273] Reducing Torch parallelism from 64 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
(EngineCore_DP0 pid=2399427) INFO 09-23 00:14:54 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3, 4, 5, 6, 7], buffer_handle=(8, 16777216, 10, 'psm_1ff605e3'), local_subscribe_addr='ipc:///tmp/fbc7fd4d-e119-4207-a6ab-34a78d109128', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 09-23 00:15:07 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_84a1c79a'), local_subscribe_addr='ipc:///tmp/2ad30471-7788-44eb-815c-0d5a5ec25bcd', remote_subscribe_addr=None, remote_addr_ipv6=False)
[Gloo] Rank 0 is connected to 7 peer ranks. Expected number of connected peer ranks is : 7
INFO 09-23 00:15:53 [__init__.py:1433] Found nccl from library libnccl.so.2
INFO 09-23 00:15:53 [pynccl.py:70] vLLM is using nccl==2.27.3
WARNING 09-23 00:16:03 [custom_all_reduce.py:144] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
INFO 09-23 00:16:04 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3, 4, 5, 6, 7], buffer_handle=(7, 4194304, 6, 'psm_97688550'), local_subscribe_addr='ipc:///tmp/7f3f5aee-46d7-4ccd-8315-e0d0225607ca', remote_subscribe_addr=None, remote_addr_ipv6=False)
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
INFO 09-23 00:16:04 [parallel_state.py:1165] rank 0 in world size 8 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
INFO 09-23 00:16:04 [parallel_state.py:1165] rank 1 in world size 8 is assigned as DP rank 0, PP rank 0, TP rank 1, EP rank 1
INFO 09-23 00:16:04 [parallel_state.py:1165] rank 2 in world size 8 is assigned as DP rank 0, PP rank 0, TP rank 2, EP rank 2
INFO 09-23 00:16:04 [parallel_state.py:1165] rank 3 in world size 8 is assigned as DP rank 0, PP rank 0, TP rank 3, EP rank 3
INFO 09-23 00:16:04 [parallel_state.py:1165] rank 4 in world size 8 is assigned as DP rank 0, PP rank 0, TP rank 4, EP rank 4
INFO 09-23 00:16:04 [parallel_state.py:1165] rank 5 in world size 8 is assigned as DP rank 0, PP rank 0, TP rank 5, EP rank 5
INFO 09-23 00:16:04 [parallel_state.py:1165] rank 6 in world size 8 is assigned as DP rank 0, PP rank 0, TP rank 6, EP rank 6
INFO 09-23 00:16:04 [parallel_state.py:1165] rank 7 in world size 8 is assigned as DP rank 0, PP rank 0, TP rank 7, EP rank 7
WARNING 09-23 00:16:04 [tokenizer.py:253] Using a slow tokenizer. This might cause a significant slowdown. Consider using a fast tokenizer instead.
WARNING 09-23 00:16:04 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
ERROR 09-23 00:16:09 [multiproc_executor.py:585] WorkerProc failed to start.
ERROR 09-23 00:16:09 [multiproc_executor.py:585] Traceback (most recent call last):
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/inputs/registry.py", line 173, in call_hf_processor
    output = hf_processor(**data,
             ^^^^^^^^^^^^^^^^^^^^
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/transformers/models/internvl/processing_internvl.py", line 226, in __call__
    video_inputs = self.video_processor(videos=videos, **output_kwargs["videos_kwargs"])
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/transformers/video_processing_utils.py", line 212, in __call__
    return self.preprocess(videos, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/transformers/video_processing_utils.py", line 393, in preprocess
    preprocessed_videos = self._preprocess(videos=videos, **kwargs)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: InternS1VideoProcessor._preprocess() missing 1 required positional argument: 'video_metadata'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/v1/executor/multiproc_executor.py", line 559, in worker_main
    worker = WorkerProc(*args, **kwargs)
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/v1/executor/multiproc_executor.py", line 420, in __init__
    self.worker.init_device()
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/worker/worker_base.py", line 611, in init_device
    self.worker.init_device()  # type: ignore
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/v1/worker/gpu_worker.py", line 201, in init_device
    self.model_runner: GPUModelRunner = GPUModelRunner(
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/v1/worker/gpu_model_runner.py", line 383, in __init__
    self.mm_budget = MultiModalBudget(
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/v1/worker/utils.py", line 48, in __init__
    .get_max_tokens_per_item_by_nonzero_modality(model_config,
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/multimodal/registry.py", line 168, in get_max_tokens_per_item_by_nonzero_modality
    max_tokens_per_item = self.get_max_tokens_per_item_by_modality(
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/multimodal/registry.py", line 144, in get_max_tokens_per_item_by_modality
    return profiler.get_mm_max_contiguous_tokens(
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/multimodal/profiling.py", line 311, in get_mm_max_contiguous_tokens
    return self._get_mm_max_tokens(seq_len,
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/multimodal/profiling.py", line 291, in _get_mm_max_tokens
    mm_inputs = self._get_dummy_mm_inputs(seq_len, mm_counts)
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/multimodal/profiling.py", line 173, in _get_dummy_mm_inputs
    return self.processor.apply(
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/multimodal/processing.py", line 1808, in apply
    ) = self._cached_apply_hf_processor(
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/multimodal/processing.py", line 1598, in _cached_apply_hf_processor
    ) = self._apply_hf_processor_main(
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/multimodal/processing.py", line 1352, in _apply_hf_processor_main
    mm_processed_data = self._apply_hf_processor_mm_only(
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/multimodal/processing.py", line 1309, in _apply_hf_processor_mm_only
    _, mm_processed_data, _ = self._apply_hf_processor_text_mm(
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/multimodal/processing.py", line 1236, in _apply_hf_processor_text_mm
    processed_data = self._call_hf_processor(
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/model_executor/models/interns1.py", line 348, in _call_hf_processor
    processed_outputs = super()._call_hf_processor(
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/multimodal/processing.py", line 1197, in _call_hf_processor
    return self.info.ctx.call_hf_processor(
  File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/inputs/registry.py", line 193, in call_hf_processor
    raise ValueError(msg) from exc
ValueError: Failed to apply InternVLProcessor on data={'text': '<video>', 'videos': array([[[[255, 255, 255],
         [255, 255, 255],
         [255, 255, 255],
         ...,
         [255, 255, 255],
         [255, 255, 255],
         [255, 255, 255]],
09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 
00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 
255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 
[multiproc_executor.py:585] [[[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 
[multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 
[multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]], ERROR 09-23 00:16:09 [multiproc_executor.py:585] ERROR 09-23 00:16:09 [multiproc_executor.py:585] [[255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 
[multiproc_executor.py:585] ..., ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255], ERROR 09-23 00:16:09 [multiproc_executor.py:585] [255, 255, 255]]]], shape=(255, 448, 448, 3))} with kwargs={'truncation': False} INFO 09-23 00:16:09 [multiproc_executor.py:546] Parent process exited, terminating worker INFO 09-23 00:16:09 [multiproc_executor.py:546] Parent process exited, terminating worker INFO 09-23 00:16:09 [multiproc_executor.py:546] Parent process exited, terminating worker INFO 09-23 00:16:09 [multiproc_executor.py:546] Parent process exited, terminating worker INFO 09-23 00:16:09 [multiproc_executor.py:546] Parent process exited, terminating worker INFO 09-23 00:16:09 [multiproc_executor.py:546] Parent process exited, terminating worker INFO 09-23 00:16:09 [multiproc_executor.py:546] Parent process exited, terminating worker INFO 09-23 00:16:09 [multiproc_executor.py:546] Parent process exited, terminating worker [1;36m(EngineCore_DP0 pid=2399427)[0;0m ERROR 09-23 00:16:12 [core.py:718] EngineCore failed to start. 
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718] Traceback (most recent call last):
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]   File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 709, in run_engine_core
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]     engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]   File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 505, in __init__
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]     super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]   File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 82, in __init__
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]     self.model_executor = executor_class(vllm_config)
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]   File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 54, in __init__
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]     self._init_executor()
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]   File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/v1/executor/multiproc_executor.py", line 99, in _init_executor
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]     self.workers = WorkerProc.wait_for_ready(unready_workers)
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]   File "/home/nsl/miniconda3/envs/sosbench/lib/python3.11/site-packages/vllm/v1/executor/multiproc_executor.py", line 497, in wait_for_ready
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718]     raise e from None
(EngineCore_DP0 pid=2399427) ERROR 09-23 00:16:12 [core.py:718] Exception: WorkerProc initialization failed due to an exception in a background process. See stack trace for root cause.
```
but if I downgrade to v4.55.2, it runs without issue.
The vLLM launch command is:
```
vllm serve internlm/Intern-S1 \
--trust-remote-code \
--tensor-parallel-size 8 \
--enable-auto-tool-choice \
--reasoning-parser deepseek_r1 \
--tool-call-parser internlm
```
I am using 8×H100 GPUs for this test.
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. install vllm 0.10.2
2. run cmd
### Expected behavior
run without issue
|
{
"login": "Django-Jiang",
"id": 43953876,
"node_id": "MDQ6VXNlcjQzOTUzODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/43953876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Django-Jiang",
"html_url": "https://github.com/Django-Jiang",
"followers_url": "https://api.github.com/users/Django-Jiang/followers",
"following_url": "https://api.github.com/users/Django-Jiang/following{/other_user}",
"gists_url": "https://api.github.com/users/Django-Jiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Django-Jiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Django-Jiang/subscriptions",
"organizations_url": "https://api.github.com/users/Django-Jiang/orgs",
"repos_url": "https://api.github.com/users/Django-Jiang/repos",
"events_url": "https://api.github.com/users/Django-Jiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Django-Jiang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41079/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41078
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41078/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41078/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41078/events
|
https://github.com/huggingface/transformers/pull/41078
| 3,443,220,291
|
PR_kwDOCUB6oc6p9wZw
| 41,078
|
Fix flash-attn for paged_attention when no kernels
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-23T00:29:51
| 2025-09-26T08:41:21
| 2025-09-26T08:41:21
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41078",
"html_url": "https://github.com/huggingface/transformers/pull/41078",
"diff_url": "https://github.com/huggingface/transformers/pull/41078.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41078.patch",
"merged_at": "2025-09-26T08:41:21"
}
|
Currently there is an issue when using `paged_attention` with `flash_attn` installed but without the `kernels` package: `flash_attn_varlen_func` is no longer referenced by the time it is called. For some reason, assigning it to a module-level value avoids this. The issue was observed on A100 and MI325.
For the case where `flash_attn_varlen_func` is not imported but still ends up being called, we also add a dummy function that raises a comprehensible error message.
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41078/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41077
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41077/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41077/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41077/events
|
https://github.com/huggingface/transformers/pull/41077
| 3,443,165,873
|
PR_kwDOCUB6oc6p9k8d
| 41,077
|
Fix: add num_hidden_layers property to T5GemmaConfig and add test for use_cache
|
{
"login": "priyankabolem",
"id": 179726264,
"node_id": "U_kgDOCrZnuA",
"avatar_url": "https://avatars.githubusercontent.com/u/179726264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/priyankabolem",
"html_url": "https://github.com/priyankabolem",
"followers_url": "https://api.github.com/users/priyankabolem/followers",
"following_url": "https://api.github.com/users/priyankabolem/following{/other_user}",
"gists_url": "https://api.github.com/users/priyankabolem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/priyankabolem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/priyankabolem/subscriptions",
"organizations_url": "https://api.github.com/users/priyankabolem/orgs",
"repos_url": "https://api.github.com/users/priyankabolem/repos",
"events_url": "https://api.github.com/users/priyankabolem/events{/privacy}",
"received_events_url": "https://api.github.com/users/priyankabolem/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-22T23:54:57
| 2025-09-23T19:33:59
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41077",
"html_url": "https://github.com/huggingface/transformers/pull/41077",
"diff_url": "https://github.com/huggingface/transformers/pull/41077.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41077.patch",
"merged_at": null
}
|
This PR fixes a bug in `T5GemmaConfig` where the configuration did not expose `num_hidden_layers`, which is required by generation/cache utilities when `use_cache=True`.
• Added a `num_hidden_layers` property to `T5GemmaConfig`.
• Ensured fallback to the decoder's layer count when not set explicitly.
• Added a unit test (`test_generation_t5gemma.py`) to verify generation runs successfully with cache enabled.
Testing
• Added `test_generate_use_cache_works_for_t5gemma`.
• Verified the test passes locally with pytest.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41077/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41076
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41076/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41076/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41076/events
|
https://github.com/huggingface/transformers/pull/41076
| 3,442,285,604
|
PR_kwDOCUB6oc6p6hVI
| 41,076
|
:rotating_light: [`v5`] Remove headmasking
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 9105758243,
"node_id": "LA_kwDOCUB6oc8AAAACHr7YIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for_v5?",
"name": "for_v5?",
"color": "35BC94",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T18:54:07
| 2025-09-30T14:18:42
| 2025-09-30T14:04:57
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41076",
"html_url": "https://github.com/huggingface/transformers/pull/41076",
"diff_url": "https://github.com/huggingface/transformers/pull/41076.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41076.patch",
"merged_at": "2025-09-30T14:04:57"
}
|
As per the title, it is time to deprecate this.
This includes anything in the docs, tests, and modeling:
- `xxx head mask`
- `xxx_headmasking`
- `xxx_head_mask`
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41076/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 5,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41076/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41075
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41075/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41075/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41075/events
|
https://github.com/huggingface/transformers/pull/41075
| 3,442,037,439
|
PR_kwDOCUB6oc6p5qCs
| 41,075
|
Fix Qwen3 deterministic generation when do_sample=False and num_beams=1 for Greedy Decoding
|
{
"login": "Flakes342",
"id": 60060568,
"node_id": "MDQ6VXNlcjYwMDYwNTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/60060568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Flakes342",
"html_url": "https://github.com/Flakes342",
"followers_url": "https://api.github.com/users/Flakes342/followers",
"following_url": "https://api.github.com/users/Flakes342/following{/other_user}",
"gists_url": "https://api.github.com/users/Flakes342/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Flakes342/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Flakes342/subscriptions",
"organizations_url": "https://api.github.com/users/Flakes342/orgs",
"repos_url": "https://api.github.com/users/Flakes342/repos",
"events_url": "https://api.github.com/users/Flakes342/events{/privacy}",
"received_events_url": "https://api.github.com/users/Flakes342/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-22T17:42:55
| 2025-09-23T17:02:50
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41075",
"html_url": "https://github.com/huggingface/transformers/pull/41075",
"diff_url": "https://github.com/huggingface/transformers/pull/41075.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41075.patch",
"merged_at": null
}
|
# What does this PR do?
Fixes #41060
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
# Problem
Qwen3 `generate()` was non-deterministic even with `do_sample=False` because merged model defaults (`top_k`, `top_p`, `temperature`) overrode the requested greedy decoding (`do_sample=False` and `num_beams=1`, as per the documentation).
# Reproduction
<img width="2104" height="1204" alt="image" src="https://github.com/user-attachments/assets/a4230f92-f539-44c1-8c4f-f29d5456c8c3" />
Greedy decoding is supposed to be deterministic: it doesn’t sample from the probability distribution but always takes the argmax, so given the same model, input, and context it should always produce the exact same output sequence. In our case, however, the same input produces different outputs even with the greedy-decoding flags enabled.
# Solution
Enforce `temperature=1.0`, `top_k=0`, and `top_p=1.0` whenever `do_sample=False` and `num_beams=1` in `_prepare_generation_config` in `generation/utils.py`.
# Tests
Added a simple regression test that passes the same input twice to the Qwen3-0.6B model, ensuring future releases maintain deterministic greedy behavior.
## Additional Notes
Please feel free to let me know if there are any mistakes or oversight and I'd be happy to fix it and resubmit this PR. Thank you!
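The normalization described above can be sketched in isolation. The snippet below is illustrative only: `GenConfig` and `normalize_for_greedy` are stand-in names for the real `GenerationConfig` handling inside `_prepare_generation_config`, not the PR's actual code.

```python
from dataclasses import dataclass


@dataclass
class GenConfig:
    """Stand-in for transformers' GenerationConfig (illustrative defaults)."""
    do_sample: bool = False
    num_beams: int = 1
    temperature: float = 0.7
    top_k: int = 20
    top_p: float = 0.8


def normalize_for_greedy(cfg: GenConfig) -> GenConfig:
    """Neutralize sampling knobs when greedy decoding is requested."""
    if not cfg.do_sample and cfg.num_beams == 1:
        cfg.temperature = 1.0  # no logit rescaling
        cfg.top_k = 0          # disable top-k filtering
        cfg.top_p = 1.0        # disable nucleus filtering
    return cfg


cfg = normalize_for_greedy(GenConfig(do_sample=False, num_beams=1))
print(cfg.temperature, cfg.top_k, cfg.top_p)  # 1.0 0 1.0
```

With this normalization, model-card sampling defaults can no longer leak into the greedy path.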
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41075/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41074
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41074/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41074/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41074/events
|
https://github.com/huggingface/transformers/pull/41074
| 3,441,962,136
|
PR_kwDOCUB6oc6p5Zas
| 41,074
|
Fix Qwen3 deterministic generation when do_sample=False and num_beams=1 for Greedy Decoding
|
{
"login": "Flakes342",
"id": 60060568,
"node_id": "MDQ6VXNlcjYwMDYwNTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/60060568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Flakes342",
"html_url": "https://github.com/Flakes342",
"followers_url": "https://api.github.com/users/Flakes342/followers",
"following_url": "https://api.github.com/users/Flakes342/following{/other_user}",
"gists_url": "https://api.github.com/users/Flakes342/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Flakes342/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Flakes342/subscriptions",
"organizations_url": "https://api.github.com/users/Flakes342/orgs",
"repos_url": "https://api.github.com/users/Flakes342/repos",
"events_url": "https://api.github.com/users/Flakes342/events{/privacy}",
"received_events_url": "https://api.github.com/users/Flakes342/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T17:21:15
| 2025-09-22T17:22:40
| 2025-09-22T17:22:40
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41074",
"html_url": "https://github.com/huggingface/transformers/pull/41074",
"diff_url": "https://github.com/huggingface/transformers/pull/41074.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41074.patch",
"merged_at": null
}
|
# What does this PR do?
Fixes #41060
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
# Problem
Qwen3 `generate()` was non-deterministic even with `do_sample=False` because merged model defaults (`top_k`, `top_p`, `temperature`) overrode the requested greedy decoding (`do_sample=False` and `num_beams=1`, as per the documentation).
# Reproduction
<img width="2104" height="1204" alt="image" src="https://github.com/user-attachments/assets/a4230f92-f539-44c1-8c4f-f29d5456c8c3" />
Greedy decoding is supposed to be deterministic: it doesn’t sample from the probability distribution but always takes the argmax, so given the same model, input, and context it should always produce the exact same output sequence. In our case, however, the same input produces different outputs even with the greedy-decoding flags enabled.
# Solution
Enforce `temperature=1.0`, `top_k=0`, and `top_p=1.0` whenever `do_sample=False` and `num_beams=1` in `_prepare_generation_config` in `generation/utils.py`.
# Tests
Added a simple regression test that passes the same input twice to the Qwen3-0.6B model, ensuring future releases maintain deterministic greedy behavior.
## Additional Notes
Please feel free to let me know if there are any mistakes or oversight and I'd be happy to fix it and resubmit this PR. Thank you!
|
{
"login": "Flakes342",
"id": 60060568,
"node_id": "MDQ6VXNlcjYwMDYwNTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/60060568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Flakes342",
"html_url": "https://github.com/Flakes342",
"followers_url": "https://api.github.com/users/Flakes342/followers",
"following_url": "https://api.github.com/users/Flakes342/following{/other_user}",
"gists_url": "https://api.github.com/users/Flakes342/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Flakes342/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Flakes342/subscriptions",
"organizations_url": "https://api.github.com/users/Flakes342/orgs",
"repos_url": "https://api.github.com/users/Flakes342/repos",
"events_url": "https://api.github.com/users/Flakes342/events{/privacy}",
"received_events_url": "https://api.github.com/users/Flakes342/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41074/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41073
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41073/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41073/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41073/events
|
https://github.com/huggingface/transformers/issues/41073
| 3,441,905,392
|
I_kwDOCUB6oc7NJ07w
| 41,073
|
`use_cache=True` does not work with T5GemmaForConditionalGeneration.generate()
|
{
"login": "hazemessamm",
"id": 11133593,
"node_id": "MDQ6VXNlcjExMTMzNTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/11133593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hazemessamm",
"html_url": "https://github.com/hazemessamm",
"followers_url": "https://api.github.com/users/hazemessamm/followers",
"following_url": "https://api.github.com/users/hazemessamm/following{/other_user}",
"gists_url": "https://api.github.com/users/hazemessamm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hazemessamm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hazemessamm/subscriptions",
"organizations_url": "https://api.github.com/users/hazemessamm/orgs",
"repos_url": "https://api.github.com/users/hazemessamm/repos",
"events_url": "https://api.github.com/users/hazemessamm/events{/privacy}",
"received_events_url": "https://api.github.com/users/hazemessamm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-22T17:04:14
| 2025-10-26T08:02:06
| null |
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.56.2
- Platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.31
- Python version: 3.10.14
- Huggingface_hub version: 0.35.0
- Safetensors version: 0.4.3
- Accelerate version: 1.10.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: 0,1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: distributed
- Using GPU in script?: yes
- GPU type: NVIDIA H100 PCIe
### Who can help?
I am trying to use model.generate() with `use_cache=True`, but it raises the following error:
```python
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[11], line 1
----> 1 output = model.generate(**masked_encoded_sequence, max_new_tokens=15*2+2, use_cache=True, do_sample=False)

File /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py:120, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    117 @functools.wraps(func)
    118 def decorate_context(*args, **kwargs):
    119     with ctx_factory():
--> 120         return func(*args, **kwargs)

File /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:2399, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, use_model_defaults, custom_generate, **kwargs)
   2393 if (
   2394     inputs_tensor.shape[1] != input_ids_length
   2395     and model_input_name == "inputs_embeds"
   2396     and not self.config.is_encoder_decoder
   2397 ):
   2398     max_cache_length += inputs_tensor.shape[1]
-> 2399 self._prepare_cache_for_generation(
   2400     generation_config, model_kwargs, assistant_model, batch_size, max_cache_length
   2401 )
   2403 # 8. determine generation mode
   2404 generation_mode = generation_config.get_generation_mode(assistant_model)

File /usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:2007, in GenerationMixin._prepare_cache_for_generation(self, generation_config, model_kwargs, assistant_model, batch_size, max_cache_length)
   1999     model_kwargs[cache_name] = DynamicCache(**dynamic_cache_kwargs)
   2001 # Use DynamicCache instance by default. This will avoid back and forth from legacy format that
   2002 # keeps copying the cache thus using much more memory
   2003 else:
   2004     model_kwargs[cache_name] = (
   2005         DynamicCache(**dynamic_cache_kwargs)
   2006         if not requires_cross_attention_cache
-> 2007         else EncoderDecoderCache(DynamicCache(**dynamic_cache_kwargs), DynamicCache(**dynamic_cache_kwargs))
   2008     )

File /usr/local/lib/python3.10/dist-packages/transformers/cache_utils.py:1018, in DynamicCache.__init__(self, ddp_cache_data, config, offloading, offload_only_non_sliding)
   1014 layer_types = getattr(config, "layer_types", None)
   1015 if layer_types is None:
   1016     layer_types = [
   1017         "sliding_attention" if sliding_window is not None else "full_attention"
-> 1018         for _ in range(config.num_hidden_layers)
   1019     ]
   1020 # Some models have shared layers thus no cache is needed for them (e.g. Gemma3n)
   1021 if hasattr(config, "num_kv_shared_layers"):

File /usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py:207, in PretrainedConfig.__getattribute__(self, key)
    205 if key != "attribute_map" and key in super().__getattribute__("attribute_map"):
    206     key = super().__getattribute__("attribute_map")[key]
--> 207 return super().__getattribute__(key)

AttributeError: 'T5GemmaConfig' object has no attribute 'num_hidden_layers'
```
@gante
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
import torch

from transformers import T5GemmaConfig, T5GemmaForConditionalGeneration
from transformers.models.t5gemma.configuration_t5gemma import T5GemmaModuleConfig

encoder_config = T5GemmaModuleConfig(
vocab_size=33,
hidden_size=32,
intermediate_size=128,
num_hidden_layers=2,
num_attention_heads=4,
num_key_value_heads=4,
head_dim=32,
max_position_embeddings=1024, # noqa
tie_word_embeddings=False,
layer_types=["full_attention"] * 2,
rope_theta=10000,
bos_token_id=0,
eos_token_id=1,
pad_token_id=2,
)
decoder_config = T5GemmaModuleConfig(
vocab_size=33,
hidden_size=32,
intermediate_size=128,
num_hidden_layers=2,
num_attention_heads=4,
num_key_value_heads=4,
head_dim=32,
max_position_embeddings=1024, # noqa
tie_word_embeddings=False,
layer_types=["full_attention"] * 2,
rope_theta=10000,
bos_token_id=0,
eos_token_id=1,
pad_token_id=2,
)
t5_gemma_config = T5GemmaConfig(
encoder=encoder_config,
decoder=decoder_config,
vocab_size=33,
attn_implementation="eager",
)
model = T5GemmaForConditionalGeneration(t5_gemma_config)
model.generate(torch.randint(0, 33, (1, 10)), use_cache=True)
```
### Expected behavior
I expect it to work properly: the cache setup should handle whether the model is an encoder-decoder model or not, and in the encoder-decoder case read `num_hidden_layers` from `config.decoder.num_hidden_layers`.
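For illustration, a helper like the hypothetical one below would implement the requested fallback (names are mine, not the actual transformers implementation; `SimpleNamespace` stands in for real config objects):

```python
from types import SimpleNamespace


def get_num_decoder_layers(config):
    """Return the decoder layer count, falling back to the `decoder`
    sub-config for composite encoder-decoder configs like T5GemmaConfig
    that define no top-level `num_hidden_layers`."""
    if hasattr(config, "num_hidden_layers"):
        return config.num_hidden_layers
    decoder = getattr(config, "decoder", None)
    if decoder is not None and hasattr(decoder, "num_hidden_layers"):
        return decoder.num_hidden_layers
    raise AttributeError("config defines no num_hidden_layers")


flat = SimpleNamespace(num_hidden_layers=12)
composite = SimpleNamespace(decoder=SimpleNamespace(num_hidden_layers=2))
print(get_num_decoder_layers(flat), get_num_decoder_layers(composite))  # 12 2
```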
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41073/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41072
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41072/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41072/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41072/events
|
https://github.com/huggingface/transformers/pull/41072
| 3,441,888,524
|
PR_kwDOCUB6oc6p5JBY
| 41,072
|
handle flash slow tests
|
{
"login": "itazap",
"id": 31893021,
"node_id": "MDQ6VXNlcjMxODkzMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/31893021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itazap",
"html_url": "https://github.com/itazap",
"followers_url": "https://api.github.com/users/itazap/followers",
"following_url": "https://api.github.com/users/itazap/following{/other_user}",
"gists_url": "https://api.github.com/users/itazap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itazap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itazap/subscriptions",
"organizations_url": "https://api.github.com/users/itazap/orgs",
"repos_url": "https://api.github.com/users/itazap/repos",
"events_url": "https://api.github.com/users/itazap/events{/privacy}",
"received_events_url": "https://api.github.com/users/itazap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T16:59:07
| 2025-09-26T23:31:10
| 2025-09-26T16:24:31
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41072",
"html_url": "https://github.com/huggingface/transformers/pull/41072",
"diff_url": "https://github.com/huggingface/transformers/pull/41072.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41072.patch",
"merged_at": "2025-09-26T16:24:31"
}
|
Fixes the broken slow / flash tests: the regular 4-dim attention mask cannot be passed to flash attention, and `_update_causal_mask` does not work with the patch cross-attention, so the patch attention-mask update for flash is handled in the `_prepare` function instead.
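The shape constraint behind this fix can be sketched in isolation: flash-attention kernels consume a 2-D padding mask of shape (batch, kv_len), not the 4-D additive bias (batch, 1, q_len, kv_len) built for eager/sdpa attention. The toy conversion below uses plain lists and hypothetical names and is not the PR's actual code:

```python
NEG_INF = float("-inf")


def to_padding_mask(bias_4d):
    """Collapse a 4-D additive bias to a 2-D padding mask:
    a key position is attendable (1) if any query row can see it (bias == 0)."""
    batch_mask = []
    for example in bias_4d:  # each example: (1, q_len, kv_len)
        rows = example[0]
        kv_len = len(rows[0])
        batch_mask.append(
            [1 if any(row[k] == 0 for row in rows) else 0 for k in range(kv_len)]
        )
    return batch_mask


# 1 example, q_len=2, kv_len=3; the last key is masked for the first query only.
bias = [[[[0, 0, NEG_INF], [0, 0, 0]]]]
print(to_padding_mask(bias))  # [[1, 1, 1]]
```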
|
{
"login": "itazap",
"id": 31893021,
"node_id": "MDQ6VXNlcjMxODkzMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/31893021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itazap",
"html_url": "https://github.com/itazap",
"followers_url": "https://api.github.com/users/itazap/followers",
"following_url": "https://api.github.com/users/itazap/following{/other_user}",
"gists_url": "https://api.github.com/users/itazap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itazap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itazap/subscriptions",
"organizations_url": "https://api.github.com/users/itazap/orgs",
"repos_url": "https://api.github.com/users/itazap/repos",
"events_url": "https://api.github.com/users/itazap/events{/privacy}",
"received_events_url": "https://api.github.com/users/itazap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41072/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41071
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41071/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41071/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41071/events
|
https://github.com/huggingface/transformers/pull/41071
| 3,441,760,088
|
PR_kwDOCUB6oc6p4sHU
| 41,071
|
extend gemma3n integration ut cases on XPU
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T16:25:43
| 2025-09-25T13:57:03
| 2025-09-25T13:46:38
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41071",
"html_url": "https://github.com/huggingface/transformers/pull/41071",
"diff_url": "https://github.com/huggingface/transformers/pull/41071.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41071.patch",
"merged_at": "2025-09-25T13:46:38"
}
|
@ydshieh , pls help review, thx very much.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41071/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41070
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41070/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41070/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41070/events
|
https://github.com/huggingface/transformers/pull/41070
| 3,441,744,418
|
PR_kwDOCUB6oc6p4ooy
| 41,070
|
trigger CI for #40889
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T16:21:26
| 2025-09-22T16:22:32
| 2025-09-22T16:21:58
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41070",
"html_url": "https://github.com/huggingface/transformers/pull/41070",
"diff_url": "https://github.com/huggingface/transformers/pull/41070.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41070.patch",
"merged_at": "2025-09-22T16:21:58"
}
|
# What does this PR do?
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41070/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41069
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41069/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41069/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41069/events
|
https://github.com/huggingface/transformers/pull/41069
| 3,441,615,399
|
PR_kwDOCUB6oc6p4MRt
| 41,069
|
Enable fa in amd docker
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T15:46:35
| 2025-09-26T11:57:58
| 2025-09-26T11:57:58
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41069",
"html_url": "https://github.com/huggingface/transformers/pull/41069",
"diff_url": "https://github.com/huggingface/transformers/pull/41069.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41069.patch",
"merged_at": "2025-09-26T11:57:58"
}
|
This PR adds `flash-attention` to the AMD docker and fixes some `gemma3n` tests that started failing because of it. Some new tests fail, but they are not AMD-exclusive failures, so they are out of scope for this PR. It also adds caching of URL data that repeatedly fails to download on the AMD CI (on `qwen2_5_vl`) and fixes a nit in the important models list.
I will run a manual CI run on my instance tonight to check that the CI is fine, so please do not merge yet; until then, I'd appreciate a review.
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41069/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41068
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41068/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41068/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41068/events
|
https://github.com/huggingface/transformers/pull/41068
| 3,441,610,974
|
PR_kwDOCUB6oc6p4LT0
| 41,068
|
Update quantization CI
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T15:45:18
| 2025-09-22T16:10:19
| 2025-09-22T16:10:16
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41068",
"html_url": "https://github.com/huggingface/transformers/pull/41068",
"diff_url": "https://github.com/huggingface/transformers/pull/41068.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41068.patch",
"merged_at": "2025-09-22T16:10:16"
}
|
# What does this PR do?
This PR updates the quantization CI. We remove some rarely used methods from testing and upgrade the torch and CUDA versions.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41068/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41067
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41067/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41067/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41067/events
|
https://github.com/huggingface/transformers/pull/41067
| 3,441,585,880
|
PR_kwDOCUB6oc6p4Fyj
| 41,067
|
Switch to `python:3.10-slim` for CircleCI docker images
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T15:38:38
| 2025-09-23T10:48:51
| 2025-09-23T10:48:49
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41067",
"html_url": "https://github.com/huggingface/transformers/pull/41067",
"diff_url": "https://github.com/huggingface/transformers/pull/41067.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41067.patch",
"merged_at": "2025-09-23T10:48:49"
}
|
# What does this PR do?
Python 3.9 is reaching EOL. Let's switch to Python 3.10.
I am OK with switching before the EOL date arrives (sometime this October), but if the core maintainers prefer to wait until then, that is also fine with me.
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41067/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41066
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41066/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41066/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41066/events
|
https://github.com/huggingface/transformers/pull/41066
| 3,441,541,999
|
PR_kwDOCUB6oc6p38BL
| 41,066
|
[tests] `CausalLMTester` automatically infers other test classes from `base_model_class` 🐛 🔫
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T15:27:39
| 2025-09-30T11:15:32
| 2025-09-29T13:05:08
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41066",
"html_url": "https://github.com/huggingface/transformers/pull/41066",
"diff_url": "https://github.com/huggingface/transformers/pull/41066.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41066.patch",
"merged_at": "2025-09-29T13:05:08"
}
|
# What does this PR do?
In this PR, I aim to reduce human error when using `CausalLMTester`, fixing bugs in the process 🐛 🔫
Issues tackled:
1. Tester attributes
a. Before: the tester class attributes where we set model classes were not validated -> we could set attributes with typos (e.g. `sequence_class = ...` instead of the expected `sequence_classification_class = ...`) -> tests were silently skipped
b. Now: we only need to set `base_model_class`; the others are inferred. Typos in the other attributes are also detected when we set them explicitly.
c. ⚠️ this change identified a LOT of places where we had typos (= untested classes) ⚠️
2. `question_answering_class`
a. Before: no corresponding test
b. Now: a generic test for this class
3. `all_model_classes`
a. Before: redundant with the tester class attributes where we set model classes -> easy to have mismatched information in these two places, which should hold the same classes
b. Now: `all_model_classes` is only overwritten where it needs to be (i.e. in classes where `CausalLMTester`'s model tester attributes are not complete enough to contain all model classes). There is a single source of truth for most models.
c. ⚠️ in some models, this was hiding incomplete checks (e.g. a missing pipeline map) because the model was relying on a custom `setUp` method ⚠️
4. Model-specific test issues (gemma family)
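The inference in point 1b can be sketched roughly as follows. This is an illustrative sketch only: the helper name, the attribute names, and the naming convention are assumptions, not the actual `CausalLMTester` implementation.

```python
# Hypothetical sketch: deriving sibling test-class names from a base model class
# name, following the usual transformers naming convention ("XxxModel" ->
# "XxxForCausalLM", etc.). The real inference logic may differ.
def infer_model_classes(base_model_class: str) -> dict:
    if not base_model_class.endswith("Model"):
        raise ValueError(f"Unexpected base class name: {base_model_class}")
    prefix = base_model_class[: -len("Model")]
    return {
        "causal_lm_class": f"{prefix}ForCausalLM",
        "sequence_classification_class": f"{prefix}ForSequenceClassification",
        "question_answering_class": f"{prefix}ForQuestionAnswering",
    }

print(infer_model_classes("LlamaModel")["causal_lm_class"])  # LlamaForCausalLM
```

With a single source of truth like this, a typo such as `sequence_class = ...` would simply be an unknown attribute and could be flagged instead of silently skipping tests.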
|
{
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41066/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41065
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41065/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41065/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41065/events
|
https://github.com/huggingface/transformers/pull/41065
| 3,441,535,235
|
PR_kwDOCUB6oc6p36g4
| 41,065
|
[tests] `CausalLMTester` now complains about typos 🐛 🔫
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T15:26:07
| 2025-09-22T15:35:10
| 2025-09-22T15:26:44
|
MEMBER
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41065",
"html_url": "https://github.com/huggingface/transformers/pull/41065",
"diff_url": "https://github.com/huggingface/transformers/pull/41065.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41065.patch",
"merged_at": null
}
|
# What does this PR do?
WIP
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41065/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41064
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41064/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41064/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41064/events
|
https://github.com/huggingface/transformers/pull/41064
| 3,441,484,391
|
PR_kwDOCUB6oc6p3vM2
| 41,064
|
Use device agnostic torch-amp
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T15:14:31
| 2025-09-30T09:39:57
| 2025-09-30T09:39:50
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41064",
"html_url": "https://github.com/huggingface/transformers/pull/41064",
"diff_url": "https://github.com/huggingface/transformers/pull/41064.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41064.patch",
"merged_at": null
}
|
# What does this PR do?
Before this PR, `torch.autocast` was used only for the CPU. It can be extended to other accelerator types.
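The device-agnostic idea can be sketched as below. This is a minimal illustration, not the PR's actual diff: `torch.autocast` accepts a `device_type` string such as `"cpu"`, `"cuda"`, or `"xpu"`, so the same code path can serve any accelerator once that string is derived from the device in use.

```python
# Hypothetical sketch: derive the autocast device type from a device string,
# so one code path covers CPU, CUDA, XPU, etc.
def autocast_device_type(device: str) -> str:
    # "cuda:0" -> "cuda", "xpu:1" -> "xpu", "cpu" -> "cpu"
    return device.split(":")[0]

# With torch available, this could be used roughly as:
#   with torch.autocast(device_type=autocast_device_type(str(model.device))):
#       outputs = model(**inputs)
print(autocast_device_type("cuda:0"))  # cuda
```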
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41064/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41063
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41063/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41063/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41063/events
|
https://github.com/huggingface/transformers/pull/41063
| 3,441,441,765
|
PR_kwDOCUB6oc6p3l7b
| 41,063
|
Improve documentation and errors in Mamba2-based models
|
{
"login": "mapmeld",
"id": 643918,
"node_id": "MDQ6VXNlcjY0MzkxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mapmeld",
"html_url": "https://github.com/mapmeld",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions",
"organizations_url": "https://api.github.com/users/mapmeld/orgs",
"repos_url": "https://api.github.com/users/mapmeld/repos",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"received_events_url": "https://api.github.com/users/mapmeld/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T15:04:34
| 2025-09-22T17:36:21
| 2025-09-22T17:36:21
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41063",
"html_url": "https://github.com/huggingface/transformers/pull/41063",
"diff_url": "https://github.com/huggingface/transformers/pull/41063.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41063.patch",
"merged_at": "2025-09-22T17:36:20"
}
|
# What does this PR do?
- Fixes a formatting issue with `<hfoptions>` in the [mamba2 docs](https://huggingface.co/docs/transformers/v4.56.2/en/model_doc/mamba2)
- As mentioned in the docs, Mamba2 state-spaces models still do not work with `AutoModel`, so I think it's helpful to link the Mamba2-based architectures (Bamba, Falcon H1, Zamba2).
- Fixes an error message typo from "**because on of** (selective_state_update, causal_conv1d_fn, causal_conv1d_update)" to "because one of" in multiple Mamba2-based models. This is the error which currently pops up when using a state-spaces Mamba2 model with `AutoModel`.
- The Zamba2 docs mention Mamba2 layers, so I changed their link from "state-space models (Specifically Mamba)" to "Mamba2"
Because the error message typo exists in their code, I would guess that Jamba and Zamba v1 are also based on Mamba v2, but it wasn't mentioned in their docs, so I didn't want to overcorrect.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Mamba2 + Documentation: @stevhliu , @molbap , @ArthurZucker
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41063/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41062
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41062/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41062/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41062/events
|
https://github.com/huggingface/transformers/pull/41062
| 3,441,351,903
|
PR_kwDOCUB6oc6p3SVm
| 41,062
|
Remove self-assignment
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T14:43:44
| 2025-09-24T11:44:02
| 2025-09-24T11:43:17
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41062",
"html_url": "https://github.com/huggingface/transformers/pull/41062",
"diff_url": "https://github.com/huggingface/transformers/pull/41062.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41062.patch",
"merged_at": "2025-09-24T11:43:17"
}
|
# What does this PR do?
Most of them are copy-and-paste bugs.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41062/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41061
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41061/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41061/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41061/events
|
https://github.com/huggingface/transformers/pull/41061
| 3,441,282,559
|
PR_kwDOCUB6oc6p3DHP
| 41,061
|
Tdt support
|
{
"login": "hainan-xv",
"id": 5440014,
"node_id": "MDQ6VXNlcjU0NDAwMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5440014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hainan-xv",
"html_url": "https://github.com/hainan-xv",
"followers_url": "https://api.github.com/users/hainan-xv/followers",
"following_url": "https://api.github.com/users/hainan-xv/following{/other_user}",
"gists_url": "https://api.github.com/users/hainan-xv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hainan-xv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hainan-xv/subscriptions",
"organizations_url": "https://api.github.com/users/hainan-xv/orgs",
"repos_url": "https://api.github.com/users/hainan-xv/repos",
"events_url": "https://api.github.com/users/hainan-xv/events{/privacy}",
"received_events_url": "https://api.github.com/users/hainan-xv/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-22T14:27:57
| 2025-09-22T19:55:50
| null |
NONE
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41061",
"html_url": "https://github.com/huggingface/transformers/pull/41061",
"diff_url": "https://github.com/huggingface/transformers/pull/41061.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41061.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41061/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41060
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41060/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41060/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41060/events
|
https://github.com/huggingface/transformers/issues/41060
| 3,441,235,144
|
I_kwDOCUB6oc7NHRTI
| 41,060
|
`do_sample` does not work in Qwen3 Model‘s `generate` method
|
{
"login": "Hermit-w",
"id": 129869143,
"node_id": "U_kgDOB72lVw",
"avatar_url": "https://avatars.githubusercontent.com/u/129869143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hermit-w",
"html_url": "https://github.com/Hermit-w",
"followers_url": "https://api.github.com/users/Hermit-w/followers",
"following_url": "https://api.github.com/users/Hermit-w/following{/other_user}",
"gists_url": "https://api.github.com/users/Hermit-w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hermit-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hermit-w/subscriptions",
"organizations_url": "https://api.github.com/users/Hermit-w/orgs",
"repos_url": "https://api.github.com/users/Hermit-w/repos",
"events_url": "https://api.github.com/users/Hermit-w/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hermit-w/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T14:18:26
| 2025-09-24T05:10:24
| 2025-09-24T05:10:23
|
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.56.1
- Platform: Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.28
- Python version: 3.12.11
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: 0.17.5
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA H800
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Simply run the following script:
```python
import os
import torch
import transformers

if __name__ == "__main__":
    model_name = "Qwen/Qwen3-0.6B"
    model_path = model_name
    temperature: float = 1e-5
    device = torch.device("cuda")
    model = transformers.AutoModelForCausalLM.from_pretrained(model_path).to(device).eval()
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_path)
    prompt = "Hello, please introduce yourself.\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    with torch.inference_mode():
        generate_config = transformers.GenerationConfig(
            num_beams=1,
            do_sample=False,  # this doesn't work
            max_new_tokens=200,
            # top_k=1  # this works well
        )
        outputs = model.generate(
            inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            generation_config=generate_config,
        )
        print("".join(tokenizer.batch_decode(outputs)))
```
The documentation indicates that with `num_beams=1` and `do_sample=False`, generation behaves like *greedy decoding*, which should produce the same response for the same prompt. But when I run the script twice, the responses differ, and a warning says that `do_sample=False` was modified to `do_sample=True`.

The result is shown in the following figure.

### Expected behavior
As the warning in the figure shows, I expect that when `do_sample=False` is set explicitly in the generation config, the value should not be overridden by model-specific defaults the way other values are.

However, when I set `top_k=1` instead, I consistently get the same response, even though a warning says the value may be ignored.

|
{
"login": "Hermit-w",
"id": 129869143,
"node_id": "U_kgDOB72lVw",
"avatar_url": "https://avatars.githubusercontent.com/u/129869143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hermit-w",
"html_url": "https://github.com/Hermit-w",
"followers_url": "https://api.github.com/users/Hermit-w/followers",
"following_url": "https://api.github.com/users/Hermit-w/following{/other_user}",
"gists_url": "https://api.github.com/users/Hermit-w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hermit-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hermit-w/subscriptions",
"organizations_url": "https://api.github.com/users/Hermit-w/orgs",
"repos_url": "https://api.github.com/users/Hermit-w/repos",
"events_url": "https://api.github.com/users/Hermit-w/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hermit-w/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41060/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41059
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41059/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41059/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41059/events
|
https://github.com/huggingface/transformers/pull/41059
| 3,441,195,851
|
PR_kwDOCUB6oc6p2w6Q
| 41,059
|
Fix CI jobs being all red 🔴 (false positive)
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T14:09:14
| 2025-09-22T14:51:03
| 2025-09-22T14:51:01
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41059",
"html_url": "https://github.com/huggingface/transformers/pull/41059",
"diff_url": "https://github.com/huggingface/transformers/pull/41059.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41059.patch",
"merged_at": "2025-09-22T14:51:01"
}
|
# What does this PR do?
In my previous PR #40981, there is an issue: when checking the `tests_output.txt` produced by running `pytest` under the `script` command, we should search for `COMMAND_EXIT_CODE` instead of `PYTEST_EXIT_CODE`.
In the previous PR, I did run the workflow, but only with a job that is known to fail, so I didn't notice the problem.
For a job with no failures, `PYTEST_EXIT_CODE` never appears in `tests_output.txt`, so the inferred exit code is wrong and the job shows as red.
See https://github.com/huggingface/transformers/actions/runs/17874224681 for an example.
This PR fixes the target name.
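The check being fixed can be sketched as follows; the exact log format here is an assumption for illustration, not the workflow's literal output.

```shell
# Hypothetical log: the `script` wrapper is assumed to append a line like
# COMMAND_EXIT_CODE="0" after the captured pytest output.
printf 'collected 3 items\n3 passed\nCOMMAND_EXIT_CODE="0"\n' > tests_output.txt
# Extract the code; default to 1 so a truncated log without the marker fails loudly.
exit_code=$(sed -n 's/.*COMMAND_EXIT_CODE="\([0-9]*\)".*/\1/p' tests_output.txt)
exit_code=${exit_code:-1}
echo "$exit_code"
```

Searching for the old `PYTEST_EXIT_CODE` marker in such a log finds nothing, so the fallback value is used and a green run is reported as red.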
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41059/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41058
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41058/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41058/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41058/events
|
https://github.com/huggingface/transformers/pull/41058
| 3,441,159,334
|
PR_kwDOCUB6oc6p2o6Z
| 41,058
|
Remove mention of TensorFlow/Flax/JAX from English documentation
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T14:01:37
| 2025-09-23T11:21:38
| 2025-09-23T11:14:11
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41058",
"html_url": "https://github.com/huggingface/transformers/pull/41058",
"diff_url": "https://github.com/huggingface/transformers/pull/41058.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41058.patch",
"merged_at": "2025-09-23T11:14:11"
}
|
# What does this PR do?
There is one remaining mention.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41058/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41057
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41057/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41057/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41057/events
|
https://github.com/huggingface/transformers/pull/41057
| 3,441,101,081
|
PR_kwDOCUB6oc6p2cOZ
| 41,057
|
Remove tf and flax from Chinese documentation
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T13:48:11
| 2025-09-23T11:43:48
| 2025-09-23T11:43:17
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41057",
"html_url": "https://github.com/huggingface/transformers/pull/41057",
"diff_url": "https://github.com/huggingface/transformers/pull/41057.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41057.patch",
"merged_at": "2025-09-23T11:43:17"
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As the title says.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41057/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41056
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41056/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41056/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41056/events
|
https://github.com/huggingface/transformers/pull/41056
| 3,440,979,313
|
PR_kwDOCUB6oc6p2Bl_
| 41,056
|
Modernbert fix
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T13:21:38
| 2025-09-29T09:10:18
| 2025-09-29T08:52:45
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41056",
"html_url": "https://github.com/huggingface/transformers/pull/41056",
"diff_url": "https://github.com/huggingface/transformers/pull/41056.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41056.patch",
"merged_at": "2025-09-29T08:52:45"
}
|
There is a bug in `modernbert` where using FA2 changes the shapes of the hidden states compared to `eager`.
This fixes it in the base model, and since the discrepancy was already compensated for in `ModernBertForMaskedLM`, we cancel out the change there.
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41056/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41055
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41055/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41055/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41055/events
|
https://github.com/huggingface/transformers/pull/41055
| 3,440,858,181
|
PR_kwDOCUB6oc6p1mmw
| 41,055
|
Remove <frameworkcontent> and <pt> tags from documentation
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T12:57:15
| 2025-09-22T14:43:26
| 2025-09-22T14:29:51
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41055",
"html_url": "https://github.com/huggingface/transformers/pull/41055",
"diff_url": "https://github.com/huggingface/transformers/pull/41055.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41055.patch",
"merged_at": "2025-09-22T14:29:51"
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As we are moving to PyTorch.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41055/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41054
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41054/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41054/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41054/events
|
https://github.com/huggingface/transformers/pull/41054
| 3,440,728,889
|
PR_kwDOCUB6oc6p1JzC
| 41,054
|
Mistral cache v4.42.4
|
{
"login": "92HyungjunOh",
"id": 195714267,
"node_id": "U_kgDOC6pc2w",
"avatar_url": "https://avatars.githubusercontent.com/u/195714267?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/92HyungjunOh",
"html_url": "https://github.com/92HyungjunOh",
"followers_url": "https://api.github.com/users/92HyungjunOh/followers",
"following_url": "https://api.github.com/users/92HyungjunOh/following{/other_user}",
"gists_url": "https://api.github.com/users/92HyungjunOh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/92HyungjunOh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/92HyungjunOh/subscriptions",
"organizations_url": "https://api.github.com/users/92HyungjunOh/orgs",
"repos_url": "https://api.github.com/users/92HyungjunOh/repos",
"events_url": "https://api.github.com/users/92HyungjunOh/events{/privacy}",
"received_events_url": "https://api.github.com/users/92HyungjunOh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T12:26:36
| 2025-09-23T03:42:30
| 2025-09-22T12:26:51
|
NONE
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41054",
"html_url": "https://github.com/huggingface/transformers/pull/41054",
"diff_url": "https://github.com/huggingface/transformers/pull/41054.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41054.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "92HyungjunOh",
"id": 195714267,
"node_id": "U_kgDOC6pc2w",
"avatar_url": "https://avatars.githubusercontent.com/u/195714267?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/92HyungjunOh",
"html_url": "https://github.com/92HyungjunOh",
"followers_url": "https://api.github.com/users/92HyungjunOh/followers",
"following_url": "https://api.github.com/users/92HyungjunOh/following{/other_user}",
"gists_url": "https://api.github.com/users/92HyungjunOh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/92HyungjunOh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/92HyungjunOh/subscriptions",
"organizations_url": "https://api.github.com/users/92HyungjunOh/orgs",
"repos_url": "https://api.github.com/users/92HyungjunOh/repos",
"events_url": "https://api.github.com/users/92HyungjunOh/events{/privacy}",
"received_events_url": "https://api.github.com/users/92HyungjunOh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41054/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41053
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41053/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41053/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41053/events
|
https://github.com/huggingface/transformers/pull/41053
| 3,440,534,202
|
PR_kwDOCUB6oc6p0eb1
| 41,053
|
Qwen3 moe
|
{
"login": "S1ro1",
"id": 54212263,
"node_id": "MDQ6VXNlcjU0MjEyMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54212263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/S1ro1",
"html_url": "https://github.com/S1ro1",
"followers_url": "https://api.github.com/users/S1ro1/followers",
"following_url": "https://api.github.com/users/S1ro1/following{/other_user}",
"gists_url": "https://api.github.com/users/S1ro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/S1ro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/S1ro1/subscriptions",
"organizations_url": "https://api.github.com/users/S1ro1/orgs",
"repos_url": "https://api.github.com/users/S1ro1/repos",
"events_url": "https://api.github.com/users/S1ro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/S1ro1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-22T11:40:27
| 2025-10-20T07:04:30
| null |
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41053",
"html_url": "https://github.com/huggingface/transformers/pull/41053",
"diff_url": "https://github.com/huggingface/transformers/pull/41053.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41053.patch",
"merged_at": null
}
|
```python
class Ep2DpParallel(ParallelStyle):
    def __init__(self):
        super().__init__()
        self._num_tokens_to_send = None
        self._num_tokens_to_recv = None
        self._reshuffle_indices = None
        self._reshuffled_counts = None

    def _token_dispatch(self, mod, inputs, device_mesh):
        routed_input, num_tokens_per_expert = inputs
        ep_size = device_mesh.shape[0]
        # num_tokens_per_expert is of shape (num_experts,), where each element holds the number of tokens for
        # the corresponding expert from the local rank
        with torch.no_grad():
            # we transpose num_tokens_per_expert on the device_mesh ep axis, to get the number of tokens for the local rank
            # think of all2all as a transpose operation on the device mesh
            # grouped_tokens_per_rank is of shape (ep_size * num_experts_per_rank,)
            # such as:
            # [#tokens for local expert 0 from EP rank 0, #tokens for local expert 1 from EP rank 0, ..., #tokens for local expert n from EP rank 0, ...]
            grouped_tokens_per_rank = all_to_all_single(
                num_tokens_per_expert,
                None,
                None,
                group=device_mesh.get_group(),
            )
            # this is of shape (ep_size,)
            # [#tokens for rank 0, #tokens for rank 1, ...]
            num_tokens_to_send = (
                num_tokens_per_expert.view(ep_size, -1)
                .sum(dim=1)
                .to(torch.device("cpu"), non_blocking=True)
            )
            # this is of shape (ep_size,)
            # [#tokens from rank 0, #tokens from rank 1, ...]
            num_tokens_to_recv = (
                grouped_tokens_per_rank.view(ep_size, -1)
                .sum(dim=1)
                .to(torch.device("cpu"), non_blocking=False)
            )
            self._num_tokens_to_send = num_tokens_to_send.tolist()
            self._num_tokens_to_recv = num_tokens_to_recv.tolist()
        # perform all-to-all to send the tokens to the right ranks
        routed_input = all_to_all_single_autograd(
            routed_input,
            self._num_tokens_to_recv,
            self._num_tokens_to_send,
            device_mesh.get_group(),
        )
        # routed_input is no longer sorted by expert; instead it looks like:
        # [tokens for local expert 0 from EP rank 0, tokens for local expert 0 from EP rank 1, ..., tokens for local expert 0 from EP rank n, ...]
        # this needs to be reshuffled back
        # the same applies to grouped_tokens_per_rank:
        # [#tokens for local expert 0 from EP rank 0, #tokens for local expert 0 from EP rank 1, ..., #tokens for local expert 0 from EP rank n, ...]
        return routed_input, grouped_tokens_per_rank

    @staticmethod
    def _partition_fn(name, mod, device_mesh):
        # shard on the expert dimension
        for name, param in mod.named_parameters(recurse=False):
            dist_param = nn.Parameter(distribute_tensor(param, device_mesh, [Shard(0)]))
            mod.register_parameter(name, dist_param)

    def _token_combine(self, mod, routed_output, device_mesh):
        # reverse the all-to-all from dispatch
        routed_output = all_to_all_single_autograd(
            routed_output,
            self._num_tokens_to_send,
            self._num_tokens_to_recv,
            device_mesh.get_group(),
        )
        return routed_output

    def _apply(self, module: nn.Module, device_mesh: DeviceMesh) -> nn.Module:
        return distribute_module(
            module,
            device_mesh,
            partition_fn=Ep2DpParallel._partition_fn,
            input_fn=self._token_dispatch,
            output_fn=self._token_combine,
        )
```
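For intuition, the send/receive split sizes computed in `_token_dispatch` can be sketched in plain Python, with no distributed runtime. This is a hypothetical illustration (the function name `split_sizes` and the token counts are made up), mirroring the `.view(ep_size, -1).sum(dim=1)` reduction over experts grouped contiguously by EP rank:

```python
def split_sizes(num_tokens_per_expert, ep_size):
    """Sum per-expert token counts into per-destination-rank split sizes.

    Mirrors `num_tokens_per_expert.view(ep_size, -1).sum(dim=1)`:
    experts are laid out contiguously by EP rank, so each chunk of
    len(num_tokens_per_expert) // ep_size experts belongs to one rank.
    """
    per_rank = len(num_tokens_per_expert) // ep_size
    return [
        sum(num_tokens_per_expert[r * per_rank:(r + 1) * per_rank])
        for r in range(ep_size)
    ]

# 4 global experts, 2 EP ranks -> 2 local experts per rank (hypothetical counts)
counts = [3, 1, 4, 2]
print(split_sizes(counts, ep_size=2))  # [4, 6]
```

These per-rank sums are exactly the `input_split_sizes`/`output_split_sizes` lists handed to the all-to-all.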
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41053/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41052
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41052/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41052/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41052/events
|
https://github.com/huggingface/transformers/pull/41052
| 3,440,383,222
|
PR_kwDOCUB6oc6pz9Ew
| 41,052
|
Fix seedoss
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T11:03:27
| 2025-09-22T12:54:33
| 2025-09-22T12:54:30
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41052",
"html_url": "https://github.com/huggingface/transformers/pull/41052",
"diff_url": "https://github.com/huggingface/transformers/pull/41052.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41052.patch",
"merged_at": "2025-09-22T12:54:30"
}
|
# What does this PR do?
We get
> tests/models/seed_oss/test_modeling_seed_oss.py::SeedOssIntegrationTest::test_model_36b_fp16 Killed
See [job run](https://github.com/huggingface/transformers/actions/runs/17874224681/job/50833177823)
Run on its own, this test passes. Loading with `dtype=fp16` consumes much more CPU memory, and when the whole integration test suite runs, the process gets killed.
This PR removes that test and keeps only the `bfloat16` one: we don't need to test with different dtypes in general.
It also removes the redundant test `test_model_36b_bf16`, as it is the same as `test_model_36b_sdpa` (the default attention is `sdpa`).
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41052/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41051
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41051/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41051/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41051/events
|
https://github.com/huggingface/transformers/pull/41051
| 3,440,203,832
|
PR_kwDOCUB6oc6pzVf5
| 41,051
|
Minor addition, no split modules for VideoMAEE
|
{
"login": "DuyguA",
"id": 8277232,
"node_id": "MDQ6VXNlcjgyNzcyMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8277232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DuyguA",
"html_url": "https://github.com/DuyguA",
"followers_url": "https://api.github.com/users/DuyguA/followers",
"following_url": "https://api.github.com/users/DuyguA/following{/other_user}",
"gists_url": "https://api.github.com/users/DuyguA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DuyguA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DuyguA/subscriptions",
"organizations_url": "https://api.github.com/users/DuyguA/orgs",
"repos_url": "https://api.github.com/users/DuyguA/repos",
"events_url": "https://api.github.com/users/DuyguA/events{/privacy}",
"received_events_url": "https://api.github.com/users/DuyguA/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T10:20:05
| 2025-09-23T17:47:18
| 2025-09-23T09:53:51
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41051",
"html_url": "https://github.com/huggingface/transformers/pull/41051",
"diff_url": "https://github.com/huggingface/transformers/pull/41051.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41051.patch",
"merged_at": "2025-09-23T09:53:51"
}
|
Minor addition to the VideoMAE class for better device mapping: added `no_split_modules`.
Fixes #23086
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41051/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41050
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41050/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41050/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41050/events
|
https://github.com/huggingface/transformers/pull/41050
| 3,439,965,002
|
PR_kwDOCUB6oc6pyh5u
| 41,050
|
Cast param to dtype after moving to the right device
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-09-22T09:25:15
| 2025-09-22T09:34:27
| null |
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41050",
"html_url": "https://github.com/huggingface/transformers/pull/41050",
"diff_url": "https://github.com/huggingface/transformers/pull/41050.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41050.patch",
"merged_at": null
}
|
# What does this PR do?
As per title
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41050/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41049
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41049/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41049/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41049/events
|
https://github.com/huggingface/transformers/pull/41049
| 3,439,938,838
|
PR_kwDOCUB6oc6pycKx
| 41,049
|
Fix Qwen video tests
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T09:18:24
| 2025-09-22T10:28:12
| 2025-09-22T10:28:12
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41049",
"html_url": "https://github.com/huggingface/transformers/pull/41049",
"diff_url": "https://github.com/huggingface/transformers/pull/41049.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41049.patch",
"merged_at": "2025-09-22T10:28:12"
}
|
# What does this PR do?
As per title, replaces the long videos with a tiny one and adjusts expected values where needed. It also updates the video processor to use `size` instead of the deprecated `min/max pixels` parameters.
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41049/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41048
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41048/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41048/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41048/events
|
https://github.com/huggingface/transformers/pull/41048
| 3,439,910,287
|
PR_kwDOCUB6oc6pyWDE
| 41,048
|
V4.42.4 to main
|
{
"login": "dongjin-na",
"id": 128335412,
"node_id": "U_kgDOB6Y-NA",
"avatar_url": "https://avatars.githubusercontent.com/u/128335412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dongjin-na",
"html_url": "https://github.com/dongjin-na",
"followers_url": "https://api.github.com/users/dongjin-na/followers",
"following_url": "https://api.github.com/users/dongjin-na/following{/other_user}",
"gists_url": "https://api.github.com/users/dongjin-na/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dongjin-na/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dongjin-na/subscriptions",
"organizations_url": "https://api.github.com/users/dongjin-na/orgs",
"repos_url": "https://api.github.com/users/dongjin-na/repos",
"events_url": "https://api.github.com/users/dongjin-na/events{/privacy}",
"received_events_url": "https://api.github.com/users/dongjin-na/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T09:10:41
| 2025-09-22T09:11:12
| 2025-09-22T09:11:02
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41048",
"html_url": "https://github.com/huggingface/transformers/pull/41048",
"diff_url": "https://github.com/huggingface/transformers/pull/41048.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41048.patch",
"merged_at": null
}
| null |
{
"login": "dongjin-na",
"id": 128335412,
"node_id": "U_kgDOB6Y-NA",
"avatar_url": "https://avatars.githubusercontent.com/u/128335412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dongjin-na",
"html_url": "https://github.com/dongjin-na",
"followers_url": "https://api.github.com/users/dongjin-na/followers",
"following_url": "https://api.github.com/users/dongjin-na/following{/other_user}",
"gists_url": "https://api.github.com/users/dongjin-na/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dongjin-na/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dongjin-na/subscriptions",
"organizations_url": "https://api.github.com/users/dongjin-na/orgs",
"repos_url": "https://api.github.com/users/dongjin-na/repos",
"events_url": "https://api.github.com/users/dongjin-na/events{/privacy}",
"received_events_url": "https://api.github.com/users/dongjin-na/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41048/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41047
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41047/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41047/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41047/events
|
https://github.com/huggingface/transformers/pull/41047
| 3,439,777,360
|
PR_kwDOCUB6oc6px5i7
| 41,047
|
Add write token for uploading benchmark results to the Hub
|
{
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T08:36:46
| 2025-09-22T14:13:46
| 2025-09-22T14:13:46
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41047",
"html_url": "https://github.com/huggingface/transformers/pull/41047",
"diff_url": "https://github.com/huggingface/transformers/pull/41047.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41047.patch",
"merged_at": "2025-09-22T14:13:46"
}
|
# What does this PR do?
This PR passes the correct write token for uploading benchmark results to the Hub.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41047/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41045
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41045/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41045/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41045/events
|
https://github.com/huggingface/transformers/pull/41045
| 3,439,405,119
|
PR_kwDOCUB6oc6pwpgJ
| 41,045
|
Modify Qwen3Omni parameter name since VL changed it
|
{
"login": "BakerBunker",
"id": 17872844,
"node_id": "MDQ6VXNlcjE3ODcyODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/17872844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BakerBunker",
"html_url": "https://github.com/BakerBunker",
"followers_url": "https://api.github.com/users/BakerBunker/followers",
"following_url": "https://api.github.com/users/BakerBunker/following{/other_user}",
"gists_url": "https://api.github.com/users/BakerBunker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BakerBunker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BakerBunker/subscriptions",
"organizations_url": "https://api.github.com/users/BakerBunker/orgs",
"repos_url": "https://api.github.com/users/BakerBunker/repos",
"events_url": "https://api.github.com/users/BakerBunker/events{/privacy}",
"received_events_url": "https://api.github.com/users/BakerBunker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T06:33:25
| 2025-09-22T10:07:42
| 2025-09-22T10:06:59
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41045",
"html_url": "https://github.com/huggingface/transformers/pull/41045",
"diff_url": "https://github.com/huggingface/transformers/pull/41045.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41045.patch",
"merged_at": "2025-09-22T10:06:59"
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41045/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41044
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41044/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41044/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41044/events
|
https://github.com/huggingface/transformers/issues/41044
| 3,439,402,186
|
I_kwDOCUB6oc7NARzK
| 41,044
|
[Bug] AutoModelForCausalLM.from_pretrained() causes fatal silent crash on Windows with CUDA 12.9 / RTX 3060 Ti
|
{
"login": "remby83-boop",
"id": 233688204,
"node_id": "U_kgDODe3MjA",
"avatar_url": "https://avatars.githubusercontent.com/u/233688204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remby83-boop",
"html_url": "https://github.com/remby83-boop",
"followers_url": "https://api.github.com/users/remby83-boop/followers",
"following_url": "https://api.github.com/users/remby83-boop/following{/other_user}",
"gists_url": "https://api.github.com/users/remby83-boop/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remby83-boop/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remby83-boop/subscriptions",
"organizations_url": "https://api.github.com/users/remby83-boop/orgs",
"repos_url": "https://api.github.com/users/remby83-boop/repos",
"events_url": "https://api.github.com/users/remby83-boop/events{/privacy}",
"received_events_url": "https://api.github.com/users/remby83-boop/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T06:32:11
| 2025-10-22T14:23:04
| 2025-10-22T14:23:04
|
NONE
| null | null | null | null |
### System Info
OS: Windows 11
GPU: NVIDIA GeForce RTX 3060 Ti
NVIDIA Driver Version: 5xx.xx (CUDA 12.9)
Python Version: 3.10.x
PyTorch Version: Tested on 2.7.1+cu118, 2.8.0+cu121
Transformers Version: Tested on 4.36.2, 4.30.2
CUDA Toolkit: 12.9
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
1. Create a clean Conda environment: conda create -n test-env python=3.10 -y
2. Activate it: conda activate test-env
3. Install PyTorch and Transformers:
pip install torch==2.8.0+cu121 torchvision==0.16.0+cu121 torchaudio==0.16.0+cu121 --index-url https://download.pytorch.org/whl/cu121
pip install transformers==4.36.2
4. Run the following test script:
# test_crash.py
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
print("Step 1: Loading tokenizer...")
tokenizer = AutoTokenizer.from_pretrained("gpt2") # or "microsoft/DialoGPT-small"
print("✓ Tokenizer loaded.")
print("Step 2: Loading model...")
model = AutoModelForCausalLM.from_pretrained("gpt2", device_map="auto") # Crash occurs here
print("✓ Model loaded.") # This line is never reached
```
5. Observe the output: The script prints "✓ Tokenizer loaded." and then the process terminates silently back to the command prompt with no error.
Additional Diagnostics & Notes:
- Not model-specific: the crash occurs with `gpt2`, `microsoft/DialoGPT-small`, and `microsoft/Phi-3-mini-4k-instruct`.
- Not device-specific: the crash occurs with both `device_map="auto"` and `device_map=None` (CPU).
- Sequence-classification models work: `AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")` loads successfully, indicating the issue is isolated to the `AutoModelForCausalLM` class.
- Environment isolation: the issue was reproduced in a Docker container based on `huggingface/transformers-pytorch-gpu:latest` and in multiple clean Conda environments, ruling out local configuration issues.
- Error logs: there are no Python-level errors. The crash is silent and fatal, suggesting a segmentation fault or a similar low-level issue in a native dependency (likely within the tokenizers or model-architecture C++ code).
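One way to surface such a silent native crash is Python's built-in `faulthandler` module, which dumps a Python-level traceback when the process receives a fatal signal (including access violations on Windows). This is a minimal debugging sketch, not part of the original report; the commented `from_pretrained` call stands in for the failing step:

```python
import faulthandler
import sys

# Dump a Python traceback to stderr if the process dies inside native code
# (e.g. a segmentation fault), instead of exiting silently.
faulthandler.enable(file=sys.stderr, all_threads=True)
print("faulthandler enabled:", faulthandler.is_enabled())

# ... then run the failing call, e.g.:
# model = AutoModelForCausalLM.from_pretrained("gpt2", device_map="auto")
```

If the crash really is a native-code fault, the traceback printed by `faulthandler` should point at the last Python frame before the crash, which narrows down which dependency is at fault.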
### Expected behavior
The model should load successfully onto the available device (CPU or GPU) without causing a fatal process crash.
Potential Root Cause:
The issue is likely a memory access violation or segmentation fault occurring in the native code responsible for initializing the model's architecture or loading its weights, specifically triggered by the Windows OS + NVIDIA RTX 3060 Ti + CUDA 12.9 combination.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41044/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41043
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41043/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41043/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41043/events
|
https://github.com/huggingface/transformers/issues/41043
| 3,439,189,676
|
I_kwDOCUB6oc7M_d6s
| 41,043
|
`TFBertForMaskedLM` throws `TypeError: 'builtins.safe_open' object is not iterable`
|
{
"login": "khteh",
"id": 3871483,
"node_id": "MDQ6VXNlcjM4NzE0ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3871483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khteh",
"html_url": "https://github.com/khteh",
"followers_url": "https://api.github.com/users/khteh/followers",
"following_url": "https://api.github.com/users/khteh/following{/other_user}",
"gists_url": "https://api.github.com/users/khteh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khteh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khteh/subscriptions",
"organizations_url": "https://api.github.com/users/khteh/orgs",
"repos_url": "https://api.github.com/users/khteh/repos",
"events_url": "https://api.github.com/users/khteh/events{/privacy}",
"received_events_url": "https://api.github.com/users/khteh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-09-22T05:01:03
| 2025-09-22T13:00:43
| 2025-09-22T12:02:14
|
NONE
| null | null | null | null |
### System Info
```
- `transformers` version: 4.56.1
- Platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.41
- Python version: 3.13.3
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.6.2
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): not installed (NA)
- Tensorflow version (GPU?): 2.20.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
```
model = TFBertForMaskedLM.from_pretrained(MODEL)
File "/home/khteh/.local/share/virtualenvs/pAIthon-GaqEDHQT/lib/python3.13/site-packages/transformers/modeling_tf_utils.py", line 2964, in from_pretrained
return load_pytorch_state_dict_in_tf2_model(
model,
...<6 lines>...
tf_to_pt_weight_rename=tf_to_pt_weight_rename,
)
File "/home/khteh/.local/share/virtualenvs/pAIthon-GaqEDHQT/lib/python3.13/site-packages/transformers/modeling_tf_pytorch_utils.py", line 333, in load_pytorch_state_dict_in_tf2_model
for key in pt_state_dict:
^^^^^^^^^^^^^
TypeError: 'builtins.safe_open' object is not iterable
```
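The traceback suggests the loader received a raw `safe_open` handle where a dict-like state dict was expected; such a handle is not directly iterable, though it exposes a `.keys()` method. The following stand-in (a hypothetical class, not the real safetensors object) reproduces the symptom and shows why `for key in handle` fails while `for key in handle.keys()` works:

```python
# Hypothetical stand-in for the handle described in the traceback:
# it exposes .keys() but defines no __iter__, so `for key in handle` fails.
class FakeSafeOpen:
    def keys(self):
        return ["bert.embeddings.word_embeddings.weight", "cls.predictions.bias"]

handle = FakeSafeOpen()

try:
    for key in handle:  # mirrors `for key in pt_state_dict` in the traceback
        pass
except TypeError as e:
    print("not iterable:", e)

for key in handle.keys():  # iterating the explicit key list works
    print(key)
```

This is only an illustration of the Python iteration error, not a claim about how `load_pytorch_state_dict_in_tf2_model` should be fixed.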
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, TFBertForMaskedLM
model = TFBertForMaskedLM.from_pretrained(MODEL)
```
### Expected behavior
No error.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41043/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41042
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41042/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41042/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41042/events
|
https://github.com/huggingface/transformers/issues/41042
| 3,439,160,066
|
I_kwDOCUB6oc7M_WsC
| 41,042
|
Zero GPU quota doesn't reset
|
{
"login": "EvgenyZaretskiy",
"id": 51313235,
"node_id": "MDQ6VXNlcjUxMzEzMjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/51313235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EvgenyZaretskiy",
"html_url": "https://github.com/EvgenyZaretskiy",
"followers_url": "https://api.github.com/users/EvgenyZaretskiy/followers",
"following_url": "https://api.github.com/users/EvgenyZaretskiy/following{/other_user}",
"gists_url": "https://api.github.com/users/EvgenyZaretskiy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EvgenyZaretskiy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EvgenyZaretskiy/subscriptions",
"organizations_url": "https://api.github.com/users/EvgenyZaretskiy/orgs",
"repos_url": "https://api.github.com/users/EvgenyZaretskiy/repos",
"events_url": "https://api.github.com/users/EvgenyZaretskiy/events{/privacy}",
"received_events_url": "https://api.github.com/users/EvgenyZaretskiy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-22T04:41:27
| 2025-10-22T08:02:16
| null |
NONE
| null | null | null | null |
### System Info
This is the second part of [#3362](https://github.com/huggingface/huggingface_hub/issues/3362).
I still want to know: 1) why the quota doesn't reset on some days (see 18.09), and 2) why on other days it resets at a different, unpredictable time of day.
I use a script to monitor my account page and retrieve the remaining quota; see the attached file.
[hf-quota.txt](https://github.com/user-attachments/files/22457577/hf-quota.txt)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Monitor the account page during the day and log the seconds of quota left.
### Expected behavior
The quota resets once a day at a predictable time.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41042/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41041
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41041/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41041/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41041/events
|
https://github.com/huggingface/transformers/pull/41041
| 3,439,079,601
|
PR_kwDOCUB6oc6pvjmU
| 41,041
|
[WIP] Add YuE model
|
{
"login": "Manalelaidouni",
"id": 25346345,
"node_id": "MDQ6VXNlcjI1MzQ2MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Manalelaidouni",
"html_url": "https://github.com/Manalelaidouni",
"followers_url": "https://api.github.com/users/Manalelaidouni/followers",
"following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}",
"gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions",
"organizations_url": "https://api.github.com/users/Manalelaidouni/orgs",
"repos_url": "https://api.github.com/users/Manalelaidouni/repos",
"events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}",
"received_events_url": "https://api.github.com/users/Manalelaidouni/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null |
[] | 2025-09-22T03:52:27
| 2025-09-22T11:43:23
| null |
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41041",
"html_url": "https://github.com/huggingface/transformers/pull/41041",
"diff_url": "https://github.com/huggingface/transformers/pull/41041.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41041.patch",
"merged_at": null
}
|
# What does this PR do?
This draft PR integrates the `YuE` model into `transformers`. YuE is a lyrics-to-song generation model that takes a lyrics prompt and a reference audio clip tokenized with a finetuned `Xcodec` model. YuE itself performs two-stage generation based on two Llama2 models, topped with the `Vocos` vocoder to enhance the produced song.
**Progress**: the `YuEProcessor` and `YuETokenizer` produce results identical to the original YuE implementation, and I'm currently working on replicating the modeling part.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41041/timeline
| null | null | null | null | true
| false
|