url string | repository_url string | labels_url string | comments_url string | events_url string | html_url string | id int64 | node_id string | number int64 | title string | user dict | labels list | state string | locked bool | assignee dict | assignees list | milestone null | comments list | created_at timestamp[ms] | updated_at timestamp[ms] | closed_at timestamp[ms] | author_association string | type dict | active_lock_reason null | draft bool | pull_request dict | body string | closed_by dict | reactions dict | timeline_url string | performed_via_github_app null | state_reason string | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool | is_closed bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/37922 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37922/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37922/comments | https://api.github.com/repos/huggingface/transformers/issues/37922/events | https://github.com/huggingface/transformers/pull/37922 | 3,035,588,795 | PR_kwDOCUB6oc6UuyPD | 37,922 | [tests] Smaller model in slow cache tests | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-05-02T09:31:29 | 2025-05-06T10:15:51 | 2025-05-06T10:15:25 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37922",
"html_url": "https://github.com/huggingface/transformers/pull/37922",
"diff_url": "https://github.com/huggingface/transformers/pull/37922.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37922.patch",
"merged_at": "2025-05-06T10:15:25"
} | # What does this PR do?
Our CI is failing due to OOM in some slow tests (see `CacheHardIntegrationTest` failures [here](https://github.com/huggingface/transformers/actions/runs/14768710814/job/41465309462)).
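For intuition on the VRAM figures below: a rough lower bound for a model's footprint is parameter count × bytes per parameter (2 bytes for bf16), with activations, KV cache, and the CUDA context accounting for the rest. A small illustrative sketch (the formula is standard; the figures are back-of-the-envelope estimates, not measurements):

```python
def weight_vram_gib(num_params: float, bytes_per_param: int = 2) -> float:
    """Lower-bound VRAM for the model weights alone (bf16 -> 2 bytes/param)."""
    return num_params * bytes_per_param / 1024**3

print(round(weight_vram_gib(7e9), 1))  # 13.0 -- weights of a 7B bf16 model
print(round(weight_vram_gib(4e9), 1))  # 7.5  -- weights of a 4B bf16 model
```

Adding runtime overhead on top of these lower bounds is consistent with the ~15GB and ~9GB figures quoted in this PR.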
This PR replaces the 7B model (~15GB VRAM) with a 4B model (~9GB VRAM) in the tests that used it. It also makes a few other minor modifications to ensure a green CI (commented in the diff). | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37922/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37921 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37921/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37921/comments | https://api.github.com/repos/huggingface/transformers/issues/37921/events | https://github.com/huggingface/transformers/pull/37921 | 3,035,136,915 | PR_kwDOCUB6oc6UtSns | 37,921 | Fix wrong example in grounding dino | {
"login": "developer0hye",
"id": 35001605,
"node_id": "MDQ6VXNlcjM1MDAxNjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/35001605?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/developer0hye",
"html_url": "https://github.com/developer0hye",
"followers_url": "https://api.github.com/users/developer0hye/followers",
"following_url": "https://api.github.com/users/developer0hye/following{/other_user}",
"gists_url": "https://api.github.com/users/developer0hye/gists{/gist_id}",
"starred_url": "https://api.github.com/users/developer0hye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/developer0hye/subscriptions",
"organizations_url": "https://api.github.com/users/developer0hye/orgs",
"repos_url": "https://api.github.com/users/developer0hye/repos",
"events_url": "https://api.github.com/users/developer0hye/events{/privacy}",
"received_events_url": "https://api.github.com/users/developer0hye/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-05-02T04:08:17 | 2025-05-10T01:30:48 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37921",
"html_url": "https://github.com/huggingface/transformers/pull/37921",
"diff_url": "https://github.com/huggingface/transformers/pull/37921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37921.patch",
"merged_at": null
} | # What does this PR do?
Fixes a wrong example in Grounding DINO; more objects can be detected with this modification.
Test Input Image:

With the `text_labels` in the example, it can only detect cats when using grounding-dino-base,
<img width="881" alt="스크린샷 2025-05-02 112340_bad" src="https://github.com/user-attachments/assets/8ffda0ed-6e9f-427f-b523-eb9ad4227ab3" />
After modifying the example to follow the [official guide](https://huggingface.co/IDEA-Research/grounding-dino-base#how-to-use), it can also detect remote controls.
<img width="889" alt="스크린샷 2025-05-02 112340_good" src="https://github.com/user-attachments/assets/eaa59255-6f12-4bf9-8204-3b32a21ec62a" />
- https://huggingface.co/IDEA-Research/grounding-dino-base#how-to-use
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts, @qubvel
Library:
@zucchini-nlp
Documentation: @stevhliu
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37921/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37920 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37920/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37920/comments | https://api.github.com/repos/huggingface/transformers/issues/37920/events | https://github.com/huggingface/transformers/pull/37920 | 3,034,465,273 | PR_kwDOCUB6oc6UrE4Y | 37,920 | [core] reuse unused reserved cuda memory when loading models | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-05-01T18:48:50 | 2025-05-06T15:20:15 | 2025-05-05T14:14:05 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37920",
"html_url": "https://github.com/huggingface/transformers/pull/37920",
"diff_url": "https://github.com/huggingface/transformers/pull/37920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37920.patch",
"merged_at": "2025-05-05T14:14:05"
} | # What does this PR do?
TL;DR: check for unused reserved CUDA memory before preallocating more memory or deciding to do CPU offload.
Missing: benchmark whether this has a speed impact in `from_pretrained` on e.g. TP
### Context
(first commit containing the issue: https://github.com/huggingface/transformers/pull/36335)
There has been an issue with flaky model tests that is difficult to reproduce and where resetting CUDA memory helped. E.g. if we remove the `tearDown` function with `torch.cuda.empty_cache` in `CacheHardIntegrationTest`, we might start getting failures (depending on the device).
Tracing down the issue, we can see that repeated `from_pretrained` calls may start offloading the model. More specifically, we can see that
1. the reserved memory grows when we instantiate a second model, even when the first model is no longer actively allocating cuda memory
2. we're triggering CPU offload when there is plenty of memory for the model
On `main` + RTX 4090 (24GB), if we pick a 4B model in BF16 (~33% of device memory), we observe the following (see output below -- notice the CPU offload after the 3rd call):
```py
# How to reproduce the issue: pick a model/GPU/dtype combination such that the model takes >33% memory of the GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "Qwen/Qwen3-4B"
def generate():
    tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.bfloat16)
    inputs = tokenizer(["Here's everything I know about cats. Cats"], return_tensors="pt").to(model.device)
    _ = model.generate(**inputs, do_sample=True, max_new_tokens=1, return_dict_in_generate=True, output_scores=True)
print("generate 1")
generate()
print("memory allocated (GB)", torch.cuda.memory_allocated(0) / 1024 ** 3)
print("memory reserved (GB)", torch.cuda.memory_reserved(0) / 1024 ** 3)
print("generate 2")
generate()
print("memory allocated (GB)", torch.cuda.memory_allocated(0) / 1024 ** 3)
print("memory reserved (GB)", torch.cuda.memory_reserved(0) / 1024 ** 3)
print("generate 3")
generate()
print("memory allocated (GB)", torch.cuda.memory_allocated(0) / 1024 ** 3)
print("memory reserved (GB)", torch.cuda.memory_reserved(0) / 1024 ** 3)
print("generate 4")
generate()
print("memory allocated (GB)", torch.cuda.memory_allocated(0) / 1024 ** 3)
print("memory reserved (GB)", torch.cuda.memory_reserved(0) / 1024 ** 3)
```
```
generate 1
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.94it/s]
memory allocated (GB) 0.0079345703125
memory reserved (GB) 8.224609375
generate 2
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.89it/s]
memory allocated (GB) 0.0079345703125
memory reserved (GB) 16.443359375
generate 3
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 4.00it/s]
Some parameters are on the meta device because they were offloaded to the cpu.
memory allocated (GB) 5.62136697769165
memory reserved (GB) 8.224609375
generate 4
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.17it/s]
memory allocated (GB) 5.62136697769165
memory reserved (GB) 16.443359375
```
### Solution
The solution is quite simple: when warming up memory or deciding whether to do CPU offload, let's check the memory available in the GPU *including* unused reserved memory.
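The idea can be sketched in pure Python (a hypothetical helper; the real implementation would query `torch.cuda.mem_get_info`, `torch.cuda.memory_reserved`, and `torch.cuda.memory_allocated` for these numbers):

```python
GiB = 1024**3

def usable_memory(free_bytes: int, reserved_bytes: int, allocated_bytes: int) -> int:
    """Memory available for a new model on one device.

    Reserved-but-unallocated memory is held by PyTorch's caching allocator and
    can be reused for new tensors, so it should not count against the budget.
    """
    return free_bytes + (reserved_bytes - allocated_bytes)

# Numbers loosely mirroring the repro above: ~16 GiB reserved but almost
# nothing allocated, so another ~8.2 GiB model easily fits without offload.
print(usable_memory(7 * GiB, 16 * GiB, 0) / GiB)  # 23.0
```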
After the fix in this PR, rerunning the script above we get
```
generate 1
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.77it/s]
memory allocated (GB) 0.0079345703125
memory reserved (GB) 8.22265625
generate 2
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.78it/s]
memory allocated (GB) 0.0079345703125
memory reserved (GB) 8.22265625
generate 3
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.92it/s]
memory allocated (GB) 0.0079345703125
memory reserved (GB) 8.22265625
generate 4
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.98it/s]
memory allocated (GB) 0.0079345703125
memory reserved (GB) 8.22265625
```
| {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37920/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37920/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37919 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37919/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37919/comments | https://api.github.com/repos/huggingface/transformers/issues/37919/events | https://github.com/huggingface/transformers/pull/37919 | 3,034,233,640 | PR_kwDOCUB6oc6UqTAb | 37,919 | Feat: save_pretrained for tensor parallel (and other parallelisms) models | {
"login": "S1ro1",
"id": 54212263,
"node_id": "MDQ6VXNlcjU0MjEyMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54212263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/S1ro1",
"html_url": "https://github.com/S1ro1",
"followers_url": "https://api.github.com/users/S1ro1/followers",
"following_url": "https://api.github.com/users/S1ro1/following{/other_user}",
"gists_url": "https://api.github.com/users/S1ro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/S1ro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/S1ro1/subscriptions",
"organizations_url": "https://api.github.com/users/S1ro1/orgs",
"repos_url": "https://api.github.com/users/S1ro1/repos",
"events_url": "https://api.github.com/users/S1ro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/S1ro1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-05-01T16:37:52 | 2025-06-03T10:16:20 | 2025-05-19T18:16:22 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37919",
"html_url": "https://github.com/huggingface/transformers/pull/37919",
"diff_url": "https://github.com/huggingface/transformers/pull/37919.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37919.patch",
"merged_at": "2025-05-19T18:16:22"
} | `save_pretrained` that works on models to which tensor parallelism was applied. Works on both local tensors and DTensors.

Memory snapshot timeline for `meta-llama/Meta-Llama-3-8B-Instruct` in fp32 on 2 GPUs. The spikes on top represent the saving; this probably can't get much better. We could warn users that they can specify a smaller shard size to avoid memory spikes.
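To illustrate the shard-size trade-off (a hypothetical greedy packer, not the actual `save_pretrained` logic): a smaller max shard size bounds the transient buffer gathered per shard, at the cost of more files.

```python
def plan_shards(param_sizes, max_shard_bytes):
    """Greedily pack parameter sizes into shards of at most max_shard_bytes.
    An oversized single parameter still gets its own shard."""
    shards, current, current_bytes = [], [], 0
    for size in param_sizes:
        if current and current_bytes + size > max_shard_bytes:
            shards.append(current)
            current, current_bytes = [], 0
        current.append(size)
        current_bytes += size
    if current:
        shards.append(current)
    return shards

# Four 3-unit tensors: a cap of 6 gives two shards, a cap of 3 gives four.
print(plan_shards([3, 3, 3, 3], 6))  # [[3, 3], [3, 3]]
print(plan_shards([3, 3, 3, 3], 3))  # [[3], [3], [3], [3]]
```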
Relies on a small fix in Huggingface_hub: https://github.com/huggingface/huggingface_hub/pull/3042
EDIT: now also supports `local_*` tp plans. Tested by saving a full Llama 4 model and verifying correctness (not added to the test suite as the model is huge; tests will likely follow once user-defined tp_plans are supported).
FIXES: #36436
| {
"login": "S1ro1",
"id": 54212263,
"node_id": "MDQ6VXNlcjU0MjEyMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54212263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/S1ro1",
"html_url": "https://github.com/S1ro1",
"followers_url": "https://api.github.com/users/S1ro1/followers",
"following_url": "https://api.github.com/users/S1ro1/following{/other_user}",
"gists_url": "https://api.github.com/users/S1ro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/S1ro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/S1ro1/subscriptions",
"organizations_url": "https://api.github.com/users/S1ro1/orgs",
"repos_url": "https://api.github.com/users/S1ro1/repos",
"events_url": "https://api.github.com/users/S1ro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/S1ro1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37919/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37918 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37918/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37918/comments | https://api.github.com/repos/huggingface/transformers/issues/37918/events | https://github.com/huggingface/transformers/issues/37918 | 3,034,167,505 | I_kwDOCUB6oc602bjR | 37,918 | Inconsistent shape of logits in `GenerateBeamDecoderOnlyOutput` | {
"login": "kurzdev",
"id": 29333826,
"node_id": "MDQ6VXNlcjI5MzMzODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/29333826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kurzdev",
"html_url": "https://github.com/kurzdev",
"followers_url": "https://api.github.com/users/kurzdev/followers",
"following_url": "https://api.github.com/users/kurzdev/following{/other_user}",
"gists_url": "https://api.github.com/users/kurzdev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kurzdev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kurzdev/subscriptions",
"organizations_url": "https://api.github.com/users/kurzdev/orgs",
"repos_url": "https://api.github.com/users/kurzdev/repos",
"events_url": "https://api.github.com/users/kurzdev/events{/privacy}",
"received_events_url": "https://api.github.com/users/kurzdev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-05-01T16:01:19 | 2025-05-02T15:38:52 | 2025-05-02T15:38:36 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.48.3
- Platform: Linux-4.18.0-513.5.1.el8_9.x86_64-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: 1.6.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: yes, across 4 GPUs with `accelerate launch --multi-gpu`
- Using GPU in script?: yes
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@zucchini-nlp @stevhliu
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```py
import torch
from torch.utils.data import DataLoader
from transformers import AutoProcessor, LlavaNextForConditionalGeneration
BATCH_SIZE = 4
NUM_BEAMS = 5
model = LlavaNextForConditionalGeneration.from_pretrained(
"llava-hf/llama3-llava-next-8b-hf",
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
attn_implementation="flash_attention_2",
)
processor = AutoProcessor.from_pretrained(
"llava-hf/llama3-llava-next-8b-hf",
model_max_length=4096, # https://llava-vl.github.io/blog/2024-04-30-llava-next-video/
)
dataset = ... # Some custom VQAv2 dataset
loader = DataLoader(
dataset,
batch_size=BATCH_SIZE,
num_workers=2,
pin_memory=True,
shuffle=False,
)
for batch in loader:
    features = processor(
        images=batch["image"],
        text=batch["prompt"],
        return_tensors="pt",
        padding="max_length",
        truncation=True
    )
    outputs = model.generate(
        max_new_tokens=128,
        num_beams=NUM_BEAMS,
        length_penalty=0,
        input_ids=features["input_ids"],
        attention_mask=features["attention_mask"],
        pixel_values=features["pixel_values"],
        image_sizes=features["image_sizes"],
        pad_token_id=processor.tokenizer.eos_token_id,
        return_dict_in_generate=True,
        output_logits=True
    )
    stacked_logits = torch.stack(outputs.logits)
    print(stacked_logits.shape)  # Should be (seq_length, batch_size, vocab_size), is (seq_length, batch_size * num_beams, vocab_size)
### Expected behavior
I am using `llava-hf/llama3-llava-next-8b-hf` for generation on [VQAv2](https://visualqa.org/download.html). When generating in batches, the actual shape of the logits is `(seq_length, batch_size * num_beams, vocab_size)` instead of `(seq_length, batch_size, vocab_size)` as is stated [in the documentation](https://huggingface.co/docs/transformers/en/internal/generation_utils#transformers.generation.GenerateBeamDecoderOnlyOutput).
So for greedy decoding (which is equivalent to `num_beams=1`) and a batch size of 4, I get logits of shape `(n, 4, m)`, whereas for `num_beams=5`, I get `(n, 20, m)`.
I think this is because for each beam and element in the batch, a separate tensor is added to `raw_logits` in `_beam_search` within the `GenerationMixin`:
https://github.com/huggingface/transformers/blob/v4.51.3/src/transformers/generation/utils.py#L3915-L3928
I'm not sure if this is a bug in the implementation (which I'd assume) or an error in the documentation, hence the double-ping.
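For reference, selecting the logits along each returned sequence out of the flattened `batch_size * num_beams` dimension can be sketched with per-step beam indices (pure-Python toy shapes; the names are illustrative, not the exact transformers API):

```python
def gather_beam_logits(logits, beam_indices):
    """logits: [seq_len][batch * num_beams][vocab] nested lists.
    beam_indices: [batch][seq_len] -- for each sequence and step, the index
    into the flattened beam dimension that produced the selected token.
    Returns [batch][seq_len][vocab]: logits along each returned sequence."""
    batch = len(beam_indices)
    seq_len = len(logits)
    return [
        [logits[step][beam_indices[b][step]] for step in range(seq_len)]
        for b in range(batch)
    ]

# Toy example: batch_size=1, num_beams=2, seq_len=2, vocab_size=3.
logits = [
    [[0.1, 0.2, 0.7], [0.3, 0.3, 0.4]],  # step 0, beams 0 and 1
    [[0.5, 0.4, 0.1], [0.2, 0.6, 0.2]],  # step 1, beams 0 and 1
]
beam_indices = [[1, 0]]  # sequence 0 came from beam 1 at step 0, beam 0 at step 1
print(gather_beam_logits(logits, beam_indices))  # [[[0.3, 0.3, 0.4], [0.5, 0.4, 0.1]]]
```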
Either way, my goal is to obtain the logits for the actually generated sequence out of all beams. So if this is just an inconsistency in the documentation, I'd really appreciate it if you could point me in the direction of how to use the resulting shape to obtain the logits for the actually generated sequence per batch! | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37918/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37917 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37917/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37917/comments | https://api.github.com/repos/huggingface/transformers/issues/37917/events | https://github.com/huggingface/transformers/pull/37917 | 3,034,152,874 | PR_kwDOCUB6oc6UqBoZ | 37,917 | support MiniCPM-o2.6 | {
"login": "tc-mb",
"id": 157115220,
"node_id": "U_kgDOCV1jVA",
"avatar_url": "https://avatars.githubusercontent.com/u/157115220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tc-mb",
"html_url": "https://github.com/tc-mb",
"followers_url": "https://api.github.com/users/tc-mb/followers",
"following_url": "https://api.github.com/users/tc-mb/following{/other_user}",
"gists_url": "https://api.github.com/users/tc-mb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tc-mb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tc-mb/subscriptions",
"organizations_url": "https://api.github.com/users/tc-mb/orgs",
"repos_url": "https://api.github.com/users/tc-mb/repos",
"events_url": "https://api.github.com/users/tc-mb/events{/privacy}",
"received_events_url": "https://api.github.com/users/tc-mb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-05-01T15:53:28 | 2025-09-12T06:57:52 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37917",
"html_url": "https://github.com/huggingface/transformers/pull/37917",
"diff_url": "https://github.com/huggingface/transformers/pull/37917.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37917.patch",
"merged_at": null
} | support Minicpm-o2.6 | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37917/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 5,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37917/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37916 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37916/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37916/comments | https://api.github.com/repos/huggingface/transformers/issues/37916/events | https://github.com/huggingface/transformers/pull/37916 | 3,034,007,542 | PR_kwDOCUB6oc6Upi7k | 37,916 | Fix device mismatch by moving num_items_in_batch to loss device in fixed_cross_entropy (#37886) | {
"login": "NEREUScode",
"id": 174478950,
"node_id": "U_kgDOCmZWZg",
"avatar_url": "https://avatars.githubusercontent.com/u/174478950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NEREUScode",
"html_url": "https://github.com/NEREUScode",
"followers_url": "https://api.github.com/users/NEREUScode/followers",
"following_url": "https://api.github.com/users/NEREUScode/following{/other_user}",
"gists_url": "https://api.github.com/users/NEREUScode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NEREUScode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NEREUScode/subscriptions",
"organizations_url": "https://api.github.com/users/NEREUScode/orgs",
"repos_url": "https://api.github.com/users/NEREUScode/repos",
"events_url": "https://api.github.com/users/NEREUScode/events{/privacy}",
"received_events_url": "https://api.github.com/users/NEREUScode/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-05-01T14:37:38 | 2025-05-08T13:41:59 | 2025-05-08T13:41:59 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37916",
"html_url": "https://github.com/huggingface/transformers/pull/37916",
"diff_url": "https://github.com/huggingface/transformers/pull/37916.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37916.patch",
"merged_at": null
} | Fixes: https://github.com/huggingface/transformers/issues/37886
This PR ensures that the num_items_in_batch tensor is moved to the same device as the loss tensor before performing division inside the fixed_cross_entropy function. This prevents runtime device mismatch errors when models are trained on non-default devices (e.g., CUDA).
🔧 Changes made:
Updated fixed_cross_entropy to move num_items_in_batch to loss.device before division when reduction is set to 'sum'.
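The change can be sketched as follows. This is a minimal, framework-free stand-in for illustration only: `FakeTensor` and `fixed_cross_entropy_sketch` are hypothetical stubs that mimic the relevant `torch.Tensor` device behavior, not the real transformers code.

```python
class FakeTensor:
    """Hypothetical stand-in for torch.Tensor; only models .device / .to()."""

    def __init__(self, value, device="cpu"):
        self.value = value
        self.device = device

    def to(self, device):
        # mimic torch.Tensor.to(): return a copy placed on the target device
        return FakeTensor(self.value, device)


def fixed_cross_entropy_sketch(loss, num_items_in_batch):
    # Mirror of the PR's change: align devices before the final division,
    # so a CUDA loss divided by a CPU-resident count no longer raises a
    # device-mismatch error.
    if num_items_in_batch.device != loss.device:
        num_items_in_batch = num_items_in_batch.to(loss.device)
    return FakeTensor(loss.value / num_items_in_batch.value, loss.device)


result = fixed_cross_entropy_sketch(FakeTensor(12.0, "cuda:0"), FakeTensor(4.0, "cpu"))
print(result.value, result.device)  # -> 3.0 cuda:0
```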
This fix is particularly relevant for ForCausalLMLoss, where num_items_in_batch may be on a different device than logits or loss. | {
"login": "NEREUScode",
"id": 174478950,
"node_id": "U_kgDOCmZWZg",
"avatar_url": "https://avatars.githubusercontent.com/u/174478950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NEREUScode",
"html_url": "https://github.com/NEREUScode",
"followers_url": "https://api.github.com/users/NEREUScode/followers",
"following_url": "https://api.github.com/users/NEREUScode/following{/other_user}",
"gists_url": "https://api.github.com/users/NEREUScode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NEREUScode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NEREUScode/subscriptions",
"organizations_url": "https://api.github.com/users/NEREUScode/orgs",
"repos_url": "https://api.github.com/users/NEREUScode/repos",
"events_url": "https://api.github.com/users/NEREUScode/events{/privacy}",
"received_events_url": "https://api.github.com/users/NEREUScode/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37916/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37916/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37915 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37915/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37915/comments | https://api.github.com/repos/huggingface/transformers/issues/37915/events | https://github.com/huggingface/transformers/pull/37915 | 3,033,832,902 | PR_kwDOCUB6oc6Uo884 | 37,915 | [transformers x vLLM] standardize processors | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-05-01T13:04:42 | 2025-05-27T09:30:30 | 2025-05-27T09:30:30 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37915",
"html_url": "https://github.com/huggingface/transformers/pull/37915",
"diff_url": "https://github.com/huggingface/transformers/pull/37915.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37915.patch",
"merged_at": "2025-05-27T09:30:30"
} | # What does this PR do?
Part of https://github.com/huggingface/transformers/issues/37780. The design was tested on different model types:
OK, I verified that inference works for all models, unless I forgot about some new ones. Here is the list I tested. A few models (blip2, gotOcr, gemma3) won't be supported in the first release. Gemma3 is already planned for after we merge the first version of the integration; it requires bigger changes on our side to support bidirectional attention with `token_type_ids`.
```python
model_example_map = {
"aria": run_aria,
"aya_vision": run_aya_vision,
"chameleon": run_chameleon, # NOTE: ready but needs to add suppress token in hub saved generation config
"emu3": run_emu,
"fuyu": run_fuyu, # Almost there, needs new attn interface for Persimmon LM backend in new PR
"got_ocr": run_got_ocr, # More complex as it needs to add boxes/etc. Might support later
"idefics3": run_idefics3,
"internvl_chat": run_internvl,
"llava": run_llava,
"pixtral": run_pixtral,
"llava_next": run_llava_next,
"llava_onevision": run_llava_onevision,
"mllama": run_mllama, # Cross attn not yet supported
"mistral3": run_mistral3,
"paligemma": run_paligemma,
"paligemma2": run_paligemma2,
"qwen2_vl": run_qwen2_vl,
"qwen2_5_vl": run_qwen2_5_vl,
"vipllava": run_vipllava,
}
```
I will do a subsequent PR with the rest of the changes to the modeling code. That's pretty much all that's left. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37915/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37914 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37914/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37914/comments | https://api.github.com/repos/huggingface/transformers/issues/37914/events | https://github.com/huggingface/transformers/issues/37914 | 3,033,676,923 | I_kwDOCUB6oc600jx7 | 37,914 | Training Qwen2.5 VL with dynamic image size using more balanced Sampler for each GPU mem usage | {
"login": "OpenJarvisAI",
"id": 136460643,
"node_id": "U_kgDOCCI5Yw",
"avatar_url": "https://avatars.githubusercontent.com/u/136460643?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OpenJarvisAI",
"html_url": "https://github.com/OpenJarvisAI",
"followers_url": "https://api.github.com/users/OpenJarvisAI/followers",
"following_url": "https://api.github.com/users/OpenJarvisAI/following{/other_user}",
"gists_url": "https://api.github.com/users/OpenJarvisAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OpenJarvisAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OpenJarvisAI/subscriptions",
"organizations_url": "https://api.github.com/users/OpenJarvisAI/orgs",
"repos_url": "https://api.github.com/users/OpenJarvisAI/repos",
"events_url": "https://api.github.com/users/OpenJarvisAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/OpenJarvisAI/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-05-01T11:34:46 | 2025-05-04T03:36:25 | null | NONE | null | null | null | null | ### Feature request
Hi, I am currently training Qwen2.5 VL with PEFT; the dataloader uses lazy loading.
But since Qwen2.5 VL uses a dynamic input size, the training samples end up with very diverse sequence lengths (the number of image tokens differs from sample to sample).
This makes training very tricky: GPU memory usage is extremely imbalanced.
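One common mitigation, similar in spirit to the `group_by_length` option in `TrainingArguments`, is to batch samples of similar total length together so each batch (and hence each GPU under data parallelism) has comparable memory use. A minimal, framework-free sketch of the idea; the lengths below are made-up illustration values, not from any real dataset:

```python
def length_grouped_batches(lengths, batch_size):
    """Sketch of length-grouped batching: sort sample indices by total
    sequence length (text + image tokens), then chunk, so every batch
    contains samples of similar length and memory stays balanced."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]


# hypothetical per-sample token counts for a dynamic-resolution VLM
lengths = [1200, 300, 310, 1180, 295, 1210]
print(length_grouped_batches(lengths, batch_size=2))  # -> [[4, 1], [2, 3], [0, 5]]
```

A production sampler would also shuffle within length buckets each epoch to avoid a fixed sample order, but the grouping step above is the core of the balancing.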
### Motivation
Is there a way to support it?
### Your contribution
Is there a way to support it? | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37914/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/37913 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37913/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37913/comments | https://api.github.com/repos/huggingface/transformers/issues/37913/events | https://github.com/huggingface/transformers/pull/37913 | 3,033,586,105 | PR_kwDOCUB6oc6UoHlg | 37,913 | [tests] fix `test_cache_copy` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-05-01T10:18:55 | 2025-05-01T14:11:13 | 2025-05-01T14:11:12 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37913",
"html_url": "https://github.com/huggingface/transformers/pull/37913",
"diff_url": "https://github.com/huggingface/transformers/pull/37913.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37913.patch",
"merged_at": null
} | # What does this PR do?
[Failure reported recently](https://huggingface.co/datasets/hf-internal-testing/transformers_daily_ci/raw/dbea6f08c0ef4104bcdb1e405085f40354a006c1/2025-05-01/ci_results_run_models_gpu/new_model_failures_with_bad_commit_grouped_by_authors.json).
👀 `test_cache_copy` has a different generation output depending on whether we run it with `RUN_SLOW=1 py.test tests/utils/test_cache_utils.py` (run all cache tests) or `RUN_SLOW=1 py.test tests/utils/test_cache_utils.py -k test_cache_copy` (run the test in isolation).
This means there is some stateful effect changing the generation output. I'll keep in mind for future occurrences but, since the purpose of this test is not the generation output (but rather the correctness of `copy.deepcopy(prompt_cache)`), the test is modified to output fewer tokens, which no longer triggers the issue. | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37913/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37912 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37912/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37912/comments | https://api.github.com/repos/huggingface/transformers/issues/37912/events | https://github.com/huggingface/transformers/issues/37912 | 3,033,583,107 | I_kwDOCUB6oc600M4D | 37,912 | maybe a bug on phi3 model after refactor or not ? | {
"login": "Onverra-sudo",
"id": 207627867,
"node_id": "U_kgDODGAmWw",
"avatar_url": "https://avatars.githubusercontent.com/u/207627867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Onverra-sudo",
"html_url": "https://github.com/Onverra-sudo",
"followers_url": "https://api.github.com/users/Onverra-sudo/followers",
"following_url": "https://api.github.com/users/Onverra-sudo/following{/other_user}",
"gists_url": "https://api.github.com/users/Onverra-sudo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Onverra-sudo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Onverra-sudo/subscriptions",
"organizations_url": "https://api.github.com/users/Onverra-sudo/orgs",
"repos_url": "https://api.github.com/users/Onverra-sudo/repos",
"events_url": "https://api.github.com/users/Onverra-sudo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Onverra-sudo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-05-01T10:16:25 | 2025-05-01T15:44:15 | 2025-05-01T15:08:42 | NONE | null | null | null | null | ### System Info
For your information, after the refactor https://github.com/huggingface/transformers/commit/2c47618c1a282f925446506d53108dc6e82d9ef0
the OmniGen node for ComfyUI is broken.
https://github.com/set-soft/ComfyUI_OmniGen_Nodes
I manually patched transformers to restore the old Phi3 model:
[transformers_patch_phi3old.zip](https://github.com/user-attachments/files/19998821/transformers_patch_phi3old.zip)
But it's not a good solution.
An example of the bug with the new Phi3 model after the refactor, in the forward function: https://github.com/1038lab/ComfyUI-OmniGen/issues/37#issuecomment-2803268979
Can you explain how I can update the OmniGen module to work with the new Phi3 model after the transformers refactor?
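In short, the refactor moved rotary-embedding computation out of the attention layers: custom code that calls `decoder_layer(...)` directly (as OmniGen's `transformer.py` does) now has to compute the `(cos, sin)` pair itself and pass it as `position_embeddings`. A dependency-free sketch of that calling pattern; all names below are hypothetical stubs for illustration, not the real transformers API:

```python
import math


def rotary_embedding(position_ids, dim=4, base=10000.0):
    # stand-in for the model-level rotary embedding: one (cos, sin) pair
    # per position, shared by every decoder layer
    cos = [[math.cos(p / base ** (2 * i / dim)) for i in range(dim // 2)] for p in position_ids]
    sin = [[math.sin(p / base ** (2 * i / dim)) for i in range(dim // 2)] for p in position_ids]
    return cos, sin


class DecoderLayerStub:
    def forward(self, hidden_states, position_embeddings=None):
        if position_embeddings is None:
            # this is the failure OmniGen hits: the layer now expects the
            # caller to supply the (cos, sin) tuple
            raise TypeError("cannot unpack non-iterable NoneType object")
        cos, sin = position_embeddings
        return hidden_states  # attention math elided in this sketch


position_embeddings = rotary_embedding(position_ids=[0, 1, 2])
layer = DecoderLayerStub()
out = layer.forward([1.0, 2.0, 3.0], position_embeddings=position_embeddings)
print(out)  # -> [1.0, 2.0, 3.0]
```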
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Install ComfyUI: https://github.com/comfyanonymous/ComfyUI?tab=readme-ov-file
Install the OmniGen module: https://github.com/set-soft/ComfyUI_OmniGen_Nodes
Launch the OmniGen module with transformers 4.51.3.
### Expected behavior
No error in the call stack below:
File "D:\tools\ai\pinokio\api\comfy.git\app\custom_nodes\comfyui-omnigen\OmniGen\transformer.py", line 157, in forward
layer_outputs = decoder_layer(
File "D:\tools\ai\pinokio\api\comfy.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\tools\ai\pinokio\api\comfy.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "D:\tools\ai\pinokio\api\comfy.git\app\env\lib\site-packages\transformers\models\phi3\modeling_phi3.py", line 295, in forward
hidden_states, self_attn_weights = self.self_attn(
File "D:\tools\ai\pinokio\api\comfy.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\tools\ai\pinokio\api\comfy.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "D:\tools\ai\pinokio\api\comfy.git\app\env\lib\site-packages\transformers\models\phi3\modeling_phi3.py", line 189, in forward
cos, sin = position_embeddings
TypeError: cannot unpack non-iterable NoneType object | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37912/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37911 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37911/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37911/comments | https://api.github.com/repos/huggingface/transformers/issues/37911/events | https://github.com/huggingface/transformers/pull/37911 | 3,033,554,344 | PR_kwDOCUB6oc6UoA1J | 37,911 | [tests] remove `test_sdpa_equivalence` (redundant) | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-05-01T09:54:59 | 2025-05-16T17:37:31 | 2025-05-16T17:37:28 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37911",
"html_url": "https://github.com/huggingface/transformers/pull/37911",
"diff_url": "https://github.com/huggingface/transformers/pull/37911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37911.patch",
"merged_at": "2025-05-16T17:37:28"
} | # What does this PR do?
Was checking our [daily failed tests assigned to me from a few days ago](https://huggingface.co/datasets/hf-internal-testing/transformers_daily_ci/raw/85eefceda5a3c3c4949209389bcd719670188cf8/2025-04-30/ci_results_run_models_gpu/new_model_failures_with_bad_commit_grouped_by_authors.json), and found this one: `tests/models/helium/test_modeling_helium.py::HeliumModelTest::test_sdpa_equivalence`. It failed because the tolerance margin is not adequate for `fp16`.
Upon inspection, this test is a model-level test (not a mixin test) that exists on a few models. It is also a shorter version of an existing mixin test, `test_eager_matches_sdpa_inference`. The mixin test checks multiple `dtypes`, flags, etc. As such, `test_sdpa_equivalence` is a redundant test, and it is removed in this PR. | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37911/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37910 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37910/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37910/comments | https://api.github.com/repos/huggingface/transformers/issues/37910/events | https://github.com/huggingface/transformers/pull/37910 | 3,033,252,923 | PR_kwDOCUB6oc6UnArm | 37,910 | Fix typos in strings and comments | {
"login": "co63oc",
"id": 4617245,
"node_id": "MDQ6VXNlcjQ2MTcyNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4617245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/co63oc",
"html_url": "https://github.com/co63oc",
"followers_url": "https://api.github.com/users/co63oc/followers",
"following_url": "https://api.github.com/users/co63oc/following{/other_user}",
"gists_url": "https://api.github.com/users/co63oc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/co63oc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/co63oc/subscriptions",
"organizations_url": "https://api.github.com/users/co63oc/orgs",
"repos_url": "https://api.github.com/users/co63oc/repos",
"events_url": "https://api.github.com/users/co63oc/events{/privacy}",
"received_events_url": "https://api.github.com/users/co63oc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-05-01T06:19:05 | 2025-05-15T05:29:07 | 2025-05-01T13:58:58 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37910",
"html_url": "https://github.com/huggingface/transformers/pull/37910",
"diff_url": "https://github.com/huggingface/transformers/pull/37910.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37910.patch",
"merged_at": "2025-05-01T13:58:58"
} | # What does this PR do?
Fix typos in strings and comments found by codespell.
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37910/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37909 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37909/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37909/comments | https://api.github.com/repos/huggingface/transformers/issues/37909/events | https://github.com/huggingface/transformers/issues/37909 | 3,033,233,069 | I_kwDOCUB6oc60y3at | 37,909 | `Mask2Former`: Several typos and unused (may unexpected) function parameters. | {
"login": "Kamichanw",
"id": 13182866,
"node_id": "MDQ6VXNlcjEzMTgyODY2",
"avatar_url": "https://avatars.githubusercontent.com/u/13182866?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kamichanw",
"html_url": "https://github.com/Kamichanw",
"followers_url": "https://api.github.com/users/Kamichanw/followers",
"following_url": "https://api.github.com/users/Kamichanw/following{/other_user}",
"gists_url": "https://api.github.com/users/Kamichanw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kamichanw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kamichanw/subscriptions",
"organizations_url": "https://api.github.com/users/Kamichanw/orgs",
"repos_url": "https://api.github.com/users/Kamichanw/repos",
"events_url": "https://api.github.com/users/Kamichanw/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kamichanw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-05-01T05:59:42 | 2025-05-05T18:00:51 | 2025-05-05T18:00:51 | NONE | null | null | null | null | ### System Info
The newest version.
### Who can help?
@amyeroberts, @qubvel @stevhliu
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. The type annotation `Dict(str, Tensor)` is weird and may be a typo:
https://github.com/huggingface/transformers/blob/7a3e208892c06a5e278144eaf38c8599a42f53e7/src/transformers/models/mask2former/modeling_mask2former.py#L2362
2. `pixel_mask` is actually unused, which contradicts its documented purpose:
https://github.com/huggingface/transformers/blob/7a3e208892c06a5e278144eaf38c8599a42f53e7/src/transformers/models/mask2former/modeling_mask2former.py#L2224-L2309
3. Can @stevhliu further explain what `class_labels` and `mask_labels` are here? The current explanation may confuse readers.
https://github.com/huggingface/transformers/blob/7a3e208892c06a5e278144eaf38c8599a42f53e7/src/transformers/models/mask2former/modeling_mask2former.py#L2383-L2387
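As a quick illustration of why the `Dict(str, Tensor)` spelling from item 1 must be a typo: on Python 3.7+, calling `typing.Dict` forwards the arguments to `dict(...)` (which rejects two positional arguments), while subscripting it builds the intended annotation. A minimal sketch, with `int` standing in for `Tensor`:

```python
from typing import Dict

# Subscripting builds the intended generic annotation.
valid_annotation = Dict[str, int]

# Calling forwards the arguments to dict(...), which rejects them.
try:
    Dict(str, int)
except TypeError as exc:
    call_error = str(exc)

print(valid_annotation)  # typing.Dict[str, int]
print(call_error)
```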
### Expected behavior
1. The correct annotation should be `Dict[str, Tensor]`
2. If `pixel_mask` is useless, it should be removed. | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37909/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37908 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37908/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37908/comments | https://api.github.com/repos/huggingface/transformers/issues/37908/events | https://github.com/huggingface/transformers/issues/37908 | 3,033,227,812 | I_kwDOCUB6oc60y2Ik | 37,908 | DynamicCache results in too many torch recompiles after 4.51 | {
"login": "flishwang",
"id": 8001982,
"node_id": "MDQ6VXNlcjgwMDE5ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8001982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flishwang",
"html_url": "https://github.com/flishwang",
"followers_url": "https://api.github.com/users/flishwang/followers",
"following_url": "https://api.github.com/users/flishwang/following{/other_user}",
"gists_url": "https://api.github.com/users/flishwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flishwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flishwang/subscriptions",
"organizations_url": "https://api.github.com/users/flishwang/orgs",
"repos_url": "https://api.github.com/users/flishwang/repos",
"events_url": "https://api.github.com/users/flishwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/flishwang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-05-01T05:54:10 | 2025-07-21T08:04:16 | 2025-07-21T08:04:16 | NONE | null | null | null | null | ### System Info
accelerate=1.6.0, OS=ubuntu 22.04, numpy=1.26.4, torch=2.6.0+cu124, python=3.10
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoModelForCausalLM
sequence_label_locs = torch.as_tensor([-40,-20,-10,-1]).cuda()
level_ids_gpu = torch.as_tensor([76,77,78,79,80]).cuda()
cache = None
cls_ids_gpu = torch.as_tensor([103,130,1166,1366,3366]).cuda()
model = AutoModelForCausalLM.from_pretrained('Qwen/Qwen2-1.5B-Base').half().cuda()
model = torch.compile(model)
batch = torch.randint(low=0,high=100000,size=(8,4096)).cuda()
mask = torch.ones((8,4096),dtype=torch.int64).cuda()
loc_stride=256
@torch.no_grad()
def model_predict(model, batch, mask,sequence_label_locs, level_ids_gpu, cls_ids_gpu, cache):
model_predict_argmax = None
past_kv_cache = cache
predicts = []
non_zero_loc = mask.argmax(-1).min() // loc_stride * loc_stride
batch=batch[:,non_zero_loc:]
mask=mask[:,non_zero_loc:]
sequence_label_locs = sequence_label_locs - non_zero_loc
for idx in range(len(sequence_label_locs)):
if idx == 0:
input_ids = batch[:,:sequence_label_locs[idx]]
else:
input_ids = torch.cat([model_predict_argmax,
batch[:,sequence_label_locs[idx-1]+1:sequence_label_locs[idx]]],1)
input_mask = mask[:,:sequence_label_locs[idx]]
outputs = model(input_ids,attention_mask = input_mask,past_key_values = past_kv_cache, use_cache=True,
logits_to_keep = 1,
return_dict = True)
last_predicts = outputs.logits[:,-1:,:]
predicts.append(last_predicts)
past_kv_cache = outputs.past_key_values
if idx == 0:
model_predict_argmax = cls_ids_gpu[last_predicts[:,:,cls_ids_gpu].argmax(-1)]
else:
model_predict_argmax = level_ids_gpu[last_predicts[:, :, level_ids_gpu].argmax(-1)]
return torch.cat(predicts,1)
for i in range(0,4000,10):
mask[:,:i]=0
outs = model_predict(model, batch, mask,sequence_label_locs, level_ids_gpu, cls_ids_gpu, cache)
```
### Expected behavior
Run the script above with `TORCH_LOGS=recompiles`.
With transformers==4.49, only a few recompiles happen.
With transformers>=4.51, recompilation is triggered every time the input sequence length changes.
I notice that it seems to be related to the following code:
https://github.com/huggingface/transformers/blob/7a3e208892c06a5e278144eaf38c8599a42f53e7/src/transformers/cache_utils.py#L442
```python
not self.key_cache[layer_idx].numel() # prefers not t.numel() to len(t) == 0 to export the model
```
which in transformers<=4.49 is
```python
not len(self.key_cache[layer_idx])
```
If I modify the code to
```python
not self.key_cache[layer_idx].shape[0]
```
the number of recompiles is also reduced.
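For what it's worth, the three emptiness checks agree on the usual cache shapes; only their guard behavior under `torch.compile` differs, since `numel()` inspects every dimension while `shape[0]` / `len()` look only at the leading one. A pure-Python sketch of that arithmetic (my own illustration; shape tuples stand in for `key_cache` tensors, no torch required, and the checks can disagree if an inner dimension is zero while the first is not):

```python
from math import prod

def empty_by_numel(shape):
    # Mirrors `not t.numel()`: total element count is zero.
    # This depends on every dimension of the shape, which is the
    # suspected source of extra guards on dynamic sequence lengths.
    return prod(shape) == 0

def empty_by_first_dim(shape):
    # Mirrors `not t.shape[0]` / `not len(t)`: only the first dim is checked.
    return shape[0] == 0

# Shapes standing in for cache tensors: an empty placeholder and a
# filled layer of shape (batch, heads, seq_len, head_dim).
empty_placeholder = (0,)
filled_layer = (1, 2, 128, 64)

for shape in (empty_placeholder, filled_layer):
    assert empty_by_numel(shape) == empty_by_first_dim(shape)
```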
I'm not an expert in torch/transformers, and I'm not sure which project (PyTorch or Transformers) this bug belongs to.
I'm also not sure if the proposed modification would break the model export procedure. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37908/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37907 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37907/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37907/comments | https://api.github.com/repos/huggingface/transformers/issues/37907/events | https://github.com/huggingface/transformers/issues/37907 | 3,033,189,265 | I_kwDOCUB6oc60ysuR | 37,907 | `RuntimeError` in `Siglip2Model` Attention with NaFlex when `actual_patches != max_num_patches` | {
"login": "Enferlain",
"id": 15861396,
"node_id": "MDQ6VXNlcjE1ODYxMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/15861396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Enferlain",
"html_url": "https://github.com/Enferlain",
"followers_url": "https://api.github.com/users/Enferlain/followers",
"following_url": "https://api.github.com/users/Enferlain/following{/other_user}",
"gists_url": "https://api.github.com/users/Enferlain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Enferlain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Enferlain/subscriptions",
"organizations_url": "https://api.github.com/users/Enferlain/orgs",
"repos_url": "https://api.github.com/users/Enferlain/repos",
"events_url": "https://api.github.com/users/Enferlain/events{/privacy}",
"received_events_url": "https://api.github.com/users/Enferlain/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-05-01T05:13:04 | 2025-05-01T09:27:55 | 2025-05-01T09:27:47 | NONE | null | null | null | null | ### System Info
* Transformers version: 4.52.0.dev0 but same occurs on 4.51.3
* Platform: Win 11
* Python version: 3.11.5
* PyTorch version: 2.6.0+cu126
* Using GPU? Yes/No: Yes
* GPU type: NVIDIA RTX 3090
* CUDA/cuDNN version: cuda 12.6, cudnn 9.8
* Using `accelerate`? No
### Issue description
When using `Siglip2Model` with the `Siglip2ImageProcessorFast` for NaFlex processing (`google/siglip2-so400m-patch16-naflex`), a `RuntimeError` occurs inside `torch.nn.functional.scaled_dot_product_attention` if the actual number of patches derived from the image and `spatial_shapes` is *different* from the `max_num_patches` value used during processing.
The error seems to stem from an inconsistency between the tensors passed to the model's attention layers:
- The `hidden_states` input to the encoder layers correctly reflects the *actual* number of patches (`L_actual`, derived from `spatial_shapes`).
- However, the `attention_mask` passed alongside it is derived from the mask returned by `Siglip2ImageProcessorFast`, which appears to be padded/truncated to `max_num_patches` (`L_max`).
- This leads to a shape mismatch inside `scaled_dot_product_attention`, e.g., `Target sizes: [B, H, L_actual, L_actual]. Tensor sizes: [B, 1, L_mask, L_mask]`, where `L_actual != L_mask`.
I encountered this after updating torch/CUDA (2.6.0+cu126 tested; I don't recall the previous version, possibly cu124) on an identical dataset with the same processing approach, which previously worked fine. The mismatch may have been silently ignored or handled differently in older versions. The error occurs regardless of whether `max_num_patches` is set higher (e.g., 2048) or lower (e.g., 1024) than the actual number of patches generated for a given image.
**Traceback (Original error with `max_num_patches=1024`):**
```
Selected Model: google/siglip2-so400m-patch16-naflex (hf type)
Selected Preprocessing Mode: naflex_resize
(Using HF Processor logic with target max_num_patches=1024)
Embeddings will be saved in: data\siglip2_so400m_patch16_naflex_Naflex_Proc1024
Found source subfolders: ['0', '1']
Processing: ./anatomy\0 -> data\siglip2_so400m_patch16_naflex_Naflex_Proc1024\0
Folder '0': 0%| | 0/1500 [00:00<?, ?image/s]D:\CityClassifiers\generate_embeddings.py:270: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
model_call_kwargs = {"pixel_values": pixel_values.to(device=device, dtype=dtype), "attention_mask": attention_mask.to(device=device), "spatial_shapes": torch.tensor(spatial_shapes, dtype=torch.long).to(device=device)}
DEBUG: After Embeddings - hidden_states shape: torch.Size([1, 12288, 1152])
DEBUG: After Embeddings - spatial_shapes: tensor([[ 96, 128]], device='cuda:0')
DEBUG: After Embeddings - attention_mask shape: torch.Size([1, 1024])
DEBUG: Start of Encoder - hidden_states shape: torch.Size([1, 12288, 1152])
DEBUG: Start of Encoder - attention_mask shape: torch.Size([1, 1, 1024, 1024])
DEBUG: Layer 0 - After LayerNorm1 - hidden_states shape: torch.Size([1, 12288, 1152])
DEBUG: Layer 0 - Before SelfAttn - hidden_states shape: torch.Size([1, 12288, 1152])
DEBUG: Layer 0 - Before SelfAttn - attention_mask shape: torch.Size([1, 1, 1024, 1024])
DEBUG: Attention - Query shape: torch.Size([1, 16, 12288, 72])
DEBUG: Attention - Key shape: torch.Size([1, 16, 12288, 72])
DEBUG: Attention - Value shape: torch.Size([1, 16, 12288, 72])
DEBUG: Attention - Mask shape fed to interface: torch.Size([1, 1, 1024, 1024])
Error during get_embedding (v4.3.0) for 000-01-noob9-0.658.png (Mode: naflex_resize, Type: hf):
Traceback (most recent call last):
File "D:\CityClassifiers\generate_embeddings.py", line 273, in get_embedding
if vision_model_component: emb = vision_model_component(**model_call_kwargs).pooler_output
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\transformers\utils\generic.py", line 965, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\transformers\models\siglip2\modeling_siglip2.py", line 573, in forward
encoder_outputs: BaseModelOutput = self.encoder(
^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\transformers\utils\generic.py", line 965, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\transformers\models\siglip2\modeling_siglip2.py", line 489, in forward
layer_outputs = encoder_layer(
^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\transformers\models\siglip2\modeling_siglip2.py", line 400, in forward
hidden_states, attn_weights = self.self_attn(
^^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\transformers\models\siglip2\modeling_siglip2.py", line 331, in forward
attn_output, attn_weights = attention_interface(
^^^^^^^^^^^^^^^^^^^^
File "D:\CityClassifiers\venv\Lib\site-packages\transformers\integrations\sdpa_attention.py", line 54, in sdpa_attention_forward
attn_output = torch.nn.functional.scaled_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: The expanded size of the tensor (12288) must match the existing size (1024) at non-singleton dimension 3. Target sizes: [1, 16, 12288, 12288]. Tensor sizes: [1, 1, 1024, 1024]
```
**Example after some attempts at workarounds (still `max_num_patches=1024`):**
```
RuntimeError: The expanded size of the tensor (1024) must match the existing size (1008) at non-singleton dimension 3. Target sizes: [1, 16, 1024, 1024]. Tensor sizes: [1, 1, 1008, 1008]
Traceback (most recent call last):
... (Traceback points through Siglip2VisionModel -> Siglip2VisionTransformer -> Siglip2Encoder -> Siglip2EncoderLayer -> Siglip2Attention -> attention_interface -> scaled_dot_product_attention) ...
File ".../transformers/models/siglip2/modeling_siglip2.py", line 331, in forward
attn_output, attn_weights = attention_interface( ... attention_mask=mask_shape_1008x1008 ... )
File ".../transformers/integrations/sdpa_attention.py", line 54, in sdpa_attention_forward
attn_output = torch.nn.functional.scaled_dot_product_attention( # Expects mask compatible with Q/K/V derived from L=1024 hidden_state?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
*(The exact target/tensor sizes in the error can vary slightly depending on where the check fails, but the core mismatch between `L_actual` and `L_mask` persists).*
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
**To Reproduce**
Steps to reproduce the behavior:
1. Use a recent PyTorch version (I tried 2.6.0+cu126) with CUDA enabled.
2. Load the `google/siglip2-so400m-patch16-naflex` model and processor.
3. Choose an input image that, when processed with a specific `max_num_patches` (e.g., 1024), results in `spatial_shapes` implying an `L_actual` different from `max_num_patches`. (For example, image dimensions of `336x768` yield `L_actual=1008` when `max_num_patches=1024`.)
4. Call the processor: `inputs = processor(images=[img], return_tensors="pt", max_num_patches=1024)`
5. Prepare inputs for the model, **using the `attention_mask` directly from the `inputs` dictionary**:
```python
pixel_values = inputs["pixel_values"].to(device, dtype)
attention_mask = inputs["pixel_attention_mask"].to(device) # Shape [1, 1024]
spatial_shapes = inputs["spatial_shapes"].to(device) # Shape [[L_h, L_w]] implying L_actual=1008
model_inputs = {"pixel_values": pixel_values, "attention_mask": attention_mask, "spatial_shapes": spatial_shapes.long()}
```
6. Call the model's vision component: `outputs = model.vision_model(**model_inputs)`
7. Observe the `RuntimeError` during the forward pass.
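The shape arithmetic in step 3 can be checked without loading the model: with a 16-pixel patch size, a `336x768` image yields `(336/16)*(768/16) = 21*48 = 1008` patches, which differs from `max_num_patches=1024` and triggers the mismatch. A minimal sketch of that patch-count arithmetic only (my own helper, not part of the library; the real processor's resize logic is more involved):

```python
def naflex_patch_count(height: int, width: int, patch_size: int = 16) -> int:
    # Number of patches in a NaFlex grid for an image whose sides are
    # multiples of the patch size (the processor resizes to ensure this).
    return (height // patch_size) * (width // patch_size)

l_actual = naflex_patch_count(336, 768)  # 21 * 48 = 1008
max_num_patches = 1024
print(l_actual, l_actual == max_num_patches)  # 1008 False
```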
### Expected behavior
**Expected behavior**
The `Siglip2VisionTransformer` should ideally receive consistent inputs or internally handle the discrepancy between the sequence length implied by `spatial_shapes` (used for hidden states) and the length of the `attention_mask` provided by the processor. Either the processor should return an unpadded mask matching `spatial_shapes`, or the model should use `spatial_shapes` to correctly slice/interpret the padded mask.
**Temporary Workaround I tried:**
A workaround involves ignoring the `attention_mask` from the processor, calculating `L_actual` from `spatial_shapes`, creating a new `correct_attention_mask` of shape `[B, L_actual]`, un-padding `pixel_values` to length `L_actual`, and passing these consistent tensors to the model. | {
"login": "Enferlain",
"id": 15861396,
"node_id": "MDQ6VXNlcjE1ODYxMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/15861396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Enferlain",
"html_url": "https://github.com/Enferlain",
"followers_url": "https://api.github.com/users/Enferlain/followers",
"following_url": "https://api.github.com/users/Enferlain/following{/other_user}",
"gists_url": "https://api.github.com/users/Enferlain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Enferlain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Enferlain/subscriptions",
"organizations_url": "https://api.github.com/users/Enferlain/orgs",
"repos_url": "https://api.github.com/users/Enferlain/repos",
"events_url": "https://api.github.com/users/Enferlain/events{/privacy}",
"received_events_url": "https://api.github.com/users/Enferlain/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37907/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37906 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37906/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37906/comments | https://api.github.com/repos/huggingface/transformers/issues/37906/events | https://github.com/huggingface/transformers/issues/37906 | 3,033,120,001 | I_kwDOCUB6oc60yb0B | 37,906 | Gemma3 doesn't support passing past_key_values | {
"login": "Patchwork53",
"id": 83033987,
"node_id": "MDQ6VXNlcjgzMDMzOTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/83033987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Patchwork53",
"html_url": "https://github.com/Patchwork53",
"followers_url": "https://api.github.com/users/Patchwork53/followers",
"following_url": "https://api.github.com/users/Patchwork53/following{/other_user}",
"gists_url": "https://api.github.com/users/Patchwork53/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Patchwork53/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Patchwork53/subscriptions",
"organizations_url": "https://api.github.com/users/Patchwork53/orgs",
"repos_url": "https://api.github.com/users/Patchwork53/repos",
"events_url": "https://api.github.com/users/Patchwork53/events{/privacy}",
"received_events_url": "https://api.github.com/users/Patchwork53/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-05-01T04:13:02 | 2025-05-01T16:47:03 | 2025-05-01T16:47:03 | CONTRIBUTOR | null | null | null | null | ### System Info
OS: Ubuntu 24
Python: 3.11.10
Pytorch: 2.1.2
Transformers: 4.51.3
I tried to write a manual `my_generate()` function to emulate `model.generate()`. However, passing the `past_key_values` argument throws an out-of-bounds error.
I tried the same code with other models (`Qwen2_5_VL`) and those worked.
## Code
```python
from transformers import AutoProcessor, Gemma3ForConditionalGeneration, Qwen2_5_VLForConditionalGeneration
import torch
# model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
model_id = "google/gemma-3-12b-it"
model = Gemma3ForConditionalGeneration.from_pretrained(
model_id,
device_map="cpu",
).eval()
processor = AutoProcessor.from_pretrained(model_id)
messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [{"type": "text", "text": "What is the capital of France?"}]
}
]
inputs = processor.apply_chat_template(
messages, add_generation_prompt=True, tokenize=True,
return_dict=True, return_tensors="pt"
).to(model.device)
input_ids = inputs["input_ids"]
device = input_ids.device
with torch.inference_mode():
outputs = model.forward(
input_ids=input_ids,
past_key_values=None,
use_cache=True
)
next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
print("1st token:", processor.decode(next_token[0]))
with torch.inference_mode():
outputs = model.forward(
input_ids=next_token,
past_key_values=outputs.past_key_values,
use_cache=True
)
next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
print("2nd token:", processor.decode(next_token[0]))
```
## Trace
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Cell In[1], line 44
41 print("1st token:", processor.decode(next_token[0]))
43 with torch.inference_mode():
---> 44 outputs = model.forward(
45 input_ids=next_token,
46 past_key_values=outputs.past_key_values,
47 use_cache=True
48 )
50 next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
51 print("2nd token:", processor.decode(next_token[0]))
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/transformers/utils/generic.py:965, in can_return_tuple.<locals>.wrapper(self, *args, **kwargs)
962 set_attribute_for_modules(self, "_is_top_level_module", False)
964 try:
--> 965 output = func(self, *args, **kwargs)
966 if is_requested_to_return_tuple or (is_configured_to_return_tuple and is_top_level_module):
967 output = output.to_tuple()
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/transformers/utils/deprecation.py:172, in deprecate_kwarg.<locals>.wrapper.<locals>.wrapped_func(*args, **kwargs)
168 elif minimum_action in (Action.NOTIFY, Action.NOTIFY_ALWAYS) and not is_torchdynamo_compiling():
169 # DeprecationWarning is ignored by default, so we use FutureWarning instead
170 warnings.warn(message, FutureWarning, stacklevel=2)
--> 172 return func(*args, **kwargs)
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/transformers/models/gemma3/modeling_gemma3.py:1326, in Gemma3ForConditionalGeneration.forward(self, input_ids, pixel_values, attention_mask, position_ids, past_key_values, token_type_ids, cache_position, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, logits_to_keep, **lm_kwargs)
1321 labels = torch.where(input_ids == self.pad_token_id, self.config.ignore_index, labels)
1323 causal_mask = self._update_causal_mask(
1324 attention_mask, token_type_ids, past_key_values, cache_position, inputs_embeds, is_training
1325 )
-> 1326 outputs: CausalLMOutputWithPast = self.language_model(
1327 attention_mask=causal_mask,
1328 position_ids=position_ids,
1329 past_key_values=past_key_values,
1330 inputs_embeds=inputs_embeds,
1331 use_cache=use_cache,
1332 output_attentions=output_attentions,
1333 output_hidden_states=output_hidden_states,
1334 cache_position=cache_position,
1335 logits_to_keep=logits_to_keep,
1336 **lm_kwargs,
1337 )
1339 logits = outputs.logits
1340 loss = None
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/transformers/utils/generic.py:965, in can_return_tuple.<locals>.wrapper(self, *args, **kwargs)
962 set_attribute_for_modules(self, "_is_top_level_module", False)
964 try:
--> 965 output = func(self, *args, **kwargs)
966 if is_requested_to_return_tuple or (is_configured_to_return_tuple and is_top_level_module):
967 output = output.to_tuple()
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/transformers/utils/deprecation.py:172, in deprecate_kwarg.<locals>.wrapper.<locals>.wrapped_func(*args, **kwargs)
168 elif minimum_action in (Action.NOTIFY, Action.NOTIFY_ALWAYS) and not is_torchdynamo_compiling():
169 # DeprecationWarning is ignored by default, so we use FutureWarning instead
170 warnings.warn(message, FutureWarning, stacklevel=2)
--> 172 return func(*args, **kwargs)
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/transformers/models/gemma3/modeling_gemma3.py:942, in Gemma3ForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, cache_position, logits_to_keep, **loss_kwargs)
938 output_hidden_states = (
939 output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
940 )
941 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
--> 942 outputs: BaseModelOutputWithPast = self.model(
943 input_ids=input_ids,
944 attention_mask=attention_mask,
945 position_ids=position_ids,
946 past_key_values=past_key_values,
947 inputs_embeds=inputs_embeds,
948 use_cache=use_cache,
949 output_attentions=output_attentions,
950 output_hidden_states=output_hidden_states,
951 cache_position=cache_position,
952 **loss_kwargs,
953 )
955 hidden_states = outputs.last_hidden_state
956 # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/transformers/utils/generic.py:965, in can_return_tuple.<locals>.wrapper(self, *args, **kwargs)
962 set_attribute_for_modules(self, "_is_top_level_module", False)
964 try:
--> 965 output = func(self, *args, **kwargs)
966 if is_requested_to_return_tuple or (is_configured_to_return_tuple and is_top_level_module):
967 output = output.to_tuple()
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/transformers/models/gemma3/modeling_gemma3.py:722, in Gemma3TextModel.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, cache_position, last_cache_position, **flash_attn_kwargs)
708 layer_outputs = self._gradient_checkpointing_func(
709 partial(decoder_layer.__call__, **flash_attn_kwargs),
710 hidden_states,
(...)
719 last_cache_position,
720 )
721 else:
--> 722 layer_outputs = decoder_layer(
723 hidden_states,
724 position_embeddings_global=position_embeddings_global,
725 position_embeddings_local=position_embeddings_local,
726 attention_mask=causal_mask,
727 position_ids=position_ids,
728 past_key_value=past_key_values,
729 output_attentions=output_attentions,
730 use_cache=use_cache,
731 cache_position=cache_position,
732 last_cache_position=last_cache_position,
733 **flash_attn_kwargs,
734 )
736 hidden_states = layer_outputs[0]
738 if output_attentions:
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/transformers/models/gemma3/modeling_gemma3.py:420, in Gemma3DecoderLayer.forward(self, hidden_states, position_embeddings_global, position_embeddings_local, attention_mask, position_ids, past_key_value, output_attentions, use_cache, cache_position, last_cache_position, **kwargs)
417 else:
418 position_embeddings = position_embeddings_global
--> 420 hidden_states, self_attn_weights = self.self_attn(
421 hidden_states=hidden_states,
422 position_embeddings=position_embeddings,
423 attention_mask=attention_mask,
424 position_ids=position_ids,
425 past_key_value=past_key_value,
426 output_attentions=output_attentions,
427 use_cache=use_cache,
428 cache_position=cache_position,
429 **kwargs,
430 )
431 hidden_states = self.post_attention_layernorm(hidden_states)
432 hidden_states = residual + hidden_states
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/transformers/models/gemma3/modeling_gemma3.py:322, in Gemma3Attention.forward(self, hidden_states, position_embeddings, attention_mask, past_key_value, cache_position, **kwargs)
314 if past_key_value is not None:
315 # sin and cos are specific to RoPE models; cache_position needed for the static cache
316 cache_kwargs = {
317 "sin": sin,
318 "cos": cos,
319 "cache_position": cache_position,
320 "sliding_window": self.sliding_window,
321 }
--> 322 key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
324 # Here we need to slice as we use a static cache by default, but FA2 does not support it
325 if attention_mask is not None and self.config._attn_implementation == "flash_attention_2":
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/transformers/cache_utils.py:1782, in HybridCache.update(self, key_states, value_states, layer_idx, cache_kwargs)
1779 else:
1780 update_fn = self._static_update
-> 1782 return update_fn(
1783 cache_position,
1784 layer_idx,
1785 key_states,
1786 value_states,
1787 k_out,
1788 v_out,
1789 k_out.shape[2],
1790 )
File ~/miniconda3/envs/lmm/lib/python3.11/site-packages/transformers/cache_utils.py:1746, in HybridCache._static_update(self, cache_position, layer_idx, key_states, value_states, k_out, v_out, max_cache_len)
1745 def _static_update(self, cache_position, layer_idx, key_states, value_states, k_out, v_out, max_cache_len):
-> 1746 k_out[:, :, cache_position] = key_states
1747 v_out[:, :, cache_position] = value_states
1749 self.key_cache[layer_idx] = k_out
IndexError: index 23 is out of bounds for dimension 0 with size 23
```
`_static_update` tries to save the KV of the current token position. I believe `k_out` and `v_out` were supposed to be expanded in some previous step before `cache_position` can index into them.
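The failure can be reproduced in miniature without any model weights. Below is an illustrative sketch only — a plain-Python list stands in for the preallocated `k_out`/`v_out` tensors, and `static_update` mimics the shape of `HybridCache._static_update`, not its actual code:

```python
# A static KV cache is preallocated to a fixed max_cache_len, so an
# in-place write past the end raises instead of growing the buffer.
max_cache_len = 23
k_out = [None] * max_cache_len  # stands in for the preallocated key cache

def static_update(cache, cache_position, key_state):
    # Mirrors the structure of HybridCache._static_update: write in place
    # at cache_position, then return the (unchanged-size) buffer.
    cache[cache_position] = key_state
    return cache

static_update(k_out, 22, "kv@22")      # last valid slot: fine
try:
    static_update(k_out, 23, "kv@23")  # one past the end, as in the traceback
except IndexError as err:
    print(f"IndexError: {err}")
```

This matches the guess above: the buffers need to be expanded (or sized correctly up front) before `cache_position` can reach index 23.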
### Who can help?
@amyeroberts, @qubvel
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Run the code.
### Expected behavior
Output should be:
```
1st token: The
2nd token: capital
```
with no errors. | {
"login": "Patchwork53",
"id": 83033987,
"node_id": "MDQ6VXNlcjgzMDMzOTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/83033987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Patchwork53",
"html_url": "https://github.com/Patchwork53",
"followers_url": "https://api.github.com/users/Patchwork53/followers",
"following_url": "https://api.github.com/users/Patchwork53/following{/other_user}",
"gists_url": "https://api.github.com/users/Patchwork53/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Patchwork53/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Patchwork53/subscriptions",
"organizations_url": "https://api.github.com/users/Patchwork53/orgs",
"repos_url": "https://api.github.com/users/Patchwork53/repos",
"events_url": "https://api.github.com/users/Patchwork53/events{/privacy}",
"received_events_url": "https://api.github.com/users/Patchwork53/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37906/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37905 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37905/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37905/comments | https://api.github.com/repos/huggingface/transformers/issues/37905/events | https://github.com/huggingface/transformers/pull/37905 | 3,032,853,777 | PR_kwDOCUB6oc6UlrqZ | 37,905 | Break weight tying when quantizing input embedding | {
"login": "jerryzh168",
"id": 4958441,
"node_id": "MDQ6VXNlcjQ5NTg0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4958441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerryzh168",
"html_url": "https://github.com/jerryzh168",
"followers_url": "https://api.github.com/users/jerryzh168/followers",
"following_url": "https://api.github.com/users/jerryzh168/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryzh168/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerryzh168/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryzh168/subscriptions",
"organizations_url": "https://api.github.com/users/jerryzh168/orgs",
"repos_url": "https://api.github.com/users/jerryzh168/repos",
"events_url": "https://api.github.com/users/jerryzh168/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerryzh168/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-05-01T00:21:22 | 2025-05-02T08:53:23 | 2025-05-02T08:53:23 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37905",
"html_url": "https://github.com/huggingface/transformers/pull/37905",
"diff_url": "https://github.com/huggingface/transformers/pull/37905.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37905.patch",
"merged_at": "2025-05-02T08:53:23"
} | Summary:
Currently, when we quantize the input embedding for some models, the output embedding (`lm_head`) is quantized the same way because the two are tied, and this may not be what we want. To break the tie, we added the option to let people:
1. load the unquantized weights
2. tie the weights
3. quantize
so that the tie ends up broken.
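As a rough illustration of the mechanism (a plain-Python analogy, not the actual TorchAo code path): tied weights are two names for one underlying object, so an in-place transform of the embedding is visible through `lm_head` unless the tie is broken first.

```python
def fake_quantize(weight):
    # Stand-in for real quantization: mutate the shared storage in place.
    for i, value in enumerate(weight):
        weight[i] = round(value)
    return weight

embed_weight = [0.21, 1.78, 2.49]
lm_head_weight = embed_weight       # "tied": both names share one object

# Break the tie before quantizing: lm_head keeps an independent copy,
# so quantizing the embedding no longer touches it.
lm_head_weight = list(embed_weight)
fake_quantize(embed_weight)

print(embed_weight)    # quantized in place
print(lm_head_weight)  # still the original full-precision values
```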
Test Plan:
```
from transformers import (
AutoModelForCausalLM,
AutoProcessor,
AutoTokenizer,
TorchAoConfig,
)
from torchao.quantization.quant_api import (
IntxWeightOnlyConfig,
Int8DynamicActivationIntxWeightConfig,
AOPerModuleConfig
)
from torchao.quantization.granularity import PerGroup, PerAxis
import torch
model_id = "microsoft/Phi-4-mini-instruct"
embedding_config = IntxWeightOnlyConfig(
weight_dtype=torch.int8,
granularity=PerAxis(0),
)
linear_config = Int8DynamicActivationIntxWeightConfig(
weight_dtype=torch.int4,
weight_granularity=PerGroup(32),
weight_scale_dtype=torch.bfloat16,
)
quant_config = AOPerModuleConfig({"_default": linear_config, "model.embed_tokens": embedding_config})
quantization_config = TorchAoConfig(quant_type=quant_config, include_embedding=True, untie_embedding_weights=True)
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32, device_map="auto", quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
print(quantized_model)
print("embed_tokens.weight:", quantized_model.model.embed_tokens.weight)
print("lm head weight:", quantized_model.lm_head.weight)
from transformers.modeling_utils import find_tied_parameters
print(find_tied_parameters(quantized_model))
# Manual Testing
prompt = "Hey, are you conscious? Can you talk to me?"
messages = [
{
"role": "system",
"content": "",
},
{"role": "user", "content": prompt},
]
templated_prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
print("Prompt:", prompt)
print("Templated prompt:", templated_prompt)
inputs = tokenizer(
templated_prompt,
return_tensors="pt",
).to("cuda")
generated_ids = quantized_model.generate(**inputs, max_new_tokens=128)
output_text = tokenizer.batch_decode(
generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print("Response:", output_text[0][len(prompt):])
```
output
```
Phi3ForCausalLM(
(model): Phi3Model(
(embed_tokens): Embedding(200064, 3072, padding_idx=199999)
(layers): ModuleList(
(0-31): 32 x Phi3DecoderLayer(
(self_attn): Phi3Attention(
(o_proj): Linear(in_features=3072, out_features=3072, weight=LinearActivationQuantizedTensor(activation=<function _int8_asymm_per_token_quant at 0x7f3973d8f250>, weight=AffineQuantizedTensor(shape=torch.Size([3072, 3072]), block_size=(1, 32), device=cuda:0, _layout=QDQLayout(), tensor_impl_dtype=torch.int8, quant_min=-8, quant_max=7)))
(qkv_proj): Linear(in_features=3072, out_features=5120, weight=LinearActivationQuantizedTensor(activation=<function _int8_asymm_per_token_quant at 0x7f3973d8f250>, weight=AffineQuantizedTensor(shape=torch.Size([5120, 3072]), block_size=(1, 32), device=cuda:0, _layout=QDQLayout(), tensor_impl_dtype=torch.int8, quant_min=-8, quant_max=7)))
)
(mlp): Phi3MLP(
(gate_up_proj): Linear(in_features=3072, out_features=16384, weight=LinearActivationQuantizedTensor(activation=<function _int8_asymm_per_token_quant at 0x7f3973d8f250>, weight=AffineQuantizedTensor(shape=torch.Size([16384, 3072]), block_size=(1, 32), device=cuda:0, _layout=QDQLayout(), tensor_impl_dtype=torch.int8, quant_min=-8, quant_max=7)))
(down_proj): Linear(in_features=8192, out_features=3072, weight=LinearActivationQuantizedTensor(activation=<function _int8_asymm_per_token_quant at 0x7f3973d8f250>, weight=AffineQuantizedTensor(shape=torch.Size([3072, 8192]), block_size=(1, 32), device=cuda:0, _layout=QDQLayout(), tensor_impl_dtype=torch.int8, quant_min=-8, quant_max=7)))
(activation_fn): SiLU()
)
(input_layernorm): Phi3RMSNorm((3072,), eps=1e-05)
(post_attention_layernorm): Phi3RMSNorm((3072,), eps=1e-05)
(resid_attn_dropout): Dropout(p=0.0, inplace=False)
(resid_mlp_dropout): Dropout(p=0.0, inplace=False)
)
)
(norm): Phi3RMSNorm((3072,), eps=1e-05)
(rotary_emb): Phi3RotaryEmbedding()
)
(lm_head): Linear(in_features=3072, out_features=200064, bias=False)
)
embed_tokens.weight: AffineQuantizedTensor(tensor_impl=QDQTensorImpl(data=tensor([[-20, 4, 13, ..., 8, -5, -3],
[ -2, 1, 13, ..., 0, -18, 15],
[ 1, 2, 11, ..., 15, 0, 18],
...,
[ 0, -2, 7, ..., 4, 10, 12],
[ 0, -2, 7, ..., 4, 10, 12],
[ 0, -2, 7, ..., 4, 10, 12]], device='cuda:0',
dtype=torch.int8)... , scale=tensor([0.0083, 0.0099, 0.0115, ..., 0.0009, 0.0009, 0.0009], device='cuda:0')... , zero_point=tensor([0, 0, 0, ..., 0, 0, 0], device='cuda:0', dtype=torch.int8)... , _layout=QDQLayout()), block_size=(1, 3072), shape=torch.Size([200064, 3072]), device=cuda:0, dtype=torch.float32, requires_grad=False)
lm head weight: Parameter containing:
tensor([[-0.1689, 0.0317, 0.1060, ..., 0.0635, -0.0378, -0.0260],
[-0.0233, 0.0072, 0.1299, ..., 0.0013, -0.1748, 0.1465],
[ 0.0159, 0.0206, 0.1260, ..., 0.1748, -0.0027, 0.2041],
...,
[ 0.0002, -0.0020, 0.0062, ..., 0.0038, 0.0095, 0.0113],
[ 0.0002, -0.0020, 0.0062, ..., 0.0038, 0.0095, 0.0113],
[ 0.0002, -0.0020, 0.0062, ..., 0.0038, 0.0095, 0.0113]],
device='cuda:0')
[]
Prompt: Hey, are you conscious? Can you talk to me?
Templated prompt: <|system|><|end|><|user|>Hey, are you conscious? Can you talk to me?<|end|><|assistant|>
Response: Hello! As an AI, I don't have consciousness in the way humans do, but I'm fully operational and here to assist you. How can I help you today?
```
Reviewers:
Subscribers:
Tasks:
Tags: | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37905/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37905/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37904 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37904/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37904/comments | https://api.github.com/repos/huggingface/transformers/issues/37904/events | https://github.com/huggingface/transformers/pull/37904 | 3,032,645,662 | PR_kwDOCUB6oc6Uk9q2 | 37,904 | Feat: Add class_proba option to semantic segmentation post-processing | {
"login": "demoncoder-crypto",
"id": 174311533,
"node_id": "U_kgDOCmPIbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174311533?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/demoncoder-crypto",
"html_url": "https://github.com/demoncoder-crypto",
"followers_url": "https://api.github.com/users/demoncoder-crypto/followers",
"following_url": "https://api.github.com/users/demoncoder-crypto/following{/other_user}",
"gists_url": "https://api.github.com/users/demoncoder-crypto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/demoncoder-crypto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/demoncoder-crypto/subscriptions",
"organizations_url": "https://api.github.com/users/demoncoder-crypto/orgs",
"repos_url": "https://api.github.com/users/demoncoder-crypto/repos",
"events_url": "https://api.github.com/users/demoncoder-crypto/events{/privacy}",
"received_events_url": "https://api.github.com/users/demoncoder-crypto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-30T22:01:32 | 2025-05-13T15:35:32 | null | NONE | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37904",
"html_url": "https://github.com/huggingface/transformers/pull/37904",
"diff_url": "https://github.com/huggingface/transformers/pull/37904.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37904.patch",
"merged_at": null
} | Addresses #37715. This is my first implementation; if my direction is correct, please let me know and I will iteratively fix everything. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37904/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37903 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37903/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37903/comments | https://api.github.com/repos/huggingface/transformers/issues/37903/events | https://github.com/huggingface/transformers/pull/37903 | 3,032,590,680 | PR_kwDOCUB6oc6UkxFg | 37,903 | Fix: Optimize safetensors load by moving dtype check for meta device | {
"login": "demoncoder-crypto",
"id": 174311533,
"node_id": "U_kgDOCmPIbQ",
"avatar_url": "https://avatars.githubusercontent.com/u/174311533?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/demoncoder-crypto",
"html_url": "https://github.com/demoncoder-crypto",
"followers_url": "https://api.github.com/users/demoncoder-crypto/followers",
"following_url": "https://api.github.com/users/demoncoder-crypto/following{/other_user}",
"gists_url": "https://api.github.com/users/demoncoder-crypto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/demoncoder-crypto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/demoncoder-crypto/subscriptions",
"organizations_url": "https://api.github.com/users/demoncoder-crypto/orgs",
"repos_url": "https://api.github.com/users/demoncoder-crypto/repos",
"events_url": "https://api.github.com/users/demoncoder-crypto/events{/privacy}",
"received_events_url": "https://api.github.com/users/demoncoder-crypto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T21:40:37 | 2025-05-01T13:46:04 | 2025-05-01T13:46:03 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37903",
"html_url": "https://github.com/huggingface/transformers/pull/37903",
"diff_url": "https://github.com/huggingface/transformers/pull/37903.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37903.patch",
"merged_at": null
} | Addresses #37887. This is a first implementation; if the approach is correct, I will iteratively make the requested changes. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37903/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37902 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37902/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37902/comments | https://api.github.com/repos/huggingface/transformers/issues/37902/events | https://github.com/huggingface/transformers/pull/37902 | 3,032,528,666 | PR_kwDOCUB6oc6UkjJ7 | 37,902 | Improve performance of `load_state_dict` | {
"login": "woct0rdho",
"id": 23053399,
"node_id": "MDQ6VXNlcjIzMDUzMzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/23053399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/woct0rdho",
"html_url": "https://github.com/woct0rdho",
"followers_url": "https://api.github.com/users/woct0rdho/followers",
"following_url": "https://api.github.com/users/woct0rdho/following{/other_user}",
"gists_url": "https://api.github.com/users/woct0rdho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/woct0rdho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/woct0rdho/subscriptions",
"organizations_url": "https://api.github.com/users/woct0rdho/orgs",
"repos_url": "https://api.github.com/users/woct0rdho/repos",
"events_url": "https://api.github.com/users/woct0rdho/events{/privacy}",
"received_events_url": "https://api.github.com/users/woct0rdho/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T21:11:35 | 2025-05-03T01:28:13 | 2025-05-01T14:35:17 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37902",
"html_url": "https://github.com/huggingface/transformers/pull/37902",
"diff_url": "https://github.com/huggingface/transformers/pull/37902.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37902.patch",
"merged_at": "2025-05-01T14:35:17"
} | # What does this PR do?
We avoid executing `get_slice` unless `map_location == "meta"`, improving performance when loading a model with a large number of tensors.
Even though we avoid the dtype check in Python, the dtype will be checked at https://github.com/huggingface/safetensors/blob/7d5af853631628137a79341ddc5611d18a17f3fe/bindings/python/src/lib.rs#L1186
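A minimal sketch of the resulting control flow (function and parameter names here are illustrative, not the actual `load_state_dict` signature):

```python
def load_state_dict_sketch(tensor_names, map_location, get_slice, get_tensor):
    # Only pay for the per-tensor slice/metadata lookup on the meta path;
    # on the normal path the Rust side of safetensors still validates dtypes.
    state_dict = {}
    for name in tensor_names:
        if map_location == "meta":
            state_dict[name] = get_slice(name)   # shape/dtype, no data
        else:
            state_dict[name] = get_tensor(name)  # hot path: skip get_slice
    return state_dict

slice_calls = []
loaded = load_state_dict_sketch(
    ["embed.weight", "lm_head.weight"],
    map_location="cpu",
    get_slice=lambda name: slice_calls.append(name),
    get_tensor=lambda name: f"tensor:{name}",
)
print(slice_calls)  # empty -> get_slice never runs off the meta path
```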
Fixes #37887
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1 | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37902/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37901 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37901/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37901/comments | https://api.github.com/repos/huggingface/transformers/issues/37901/events | https://github.com/huggingface/transformers/pull/37901 | 3,032,336,502 | PR_kwDOCUB6oc6Uj5NZ | 37,901 | fix-do_sample-default | {
"login": "Lynsoo",
"id": 157243525,
"node_id": "U_kgDOCV9YhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/157243525?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lynsoo",
"html_url": "https://github.com/Lynsoo",
"followers_url": "https://api.github.com/users/Lynsoo/followers",
"following_url": "https://api.github.com/users/Lynsoo/following{/other_user}",
"gists_url": "https://api.github.com/users/Lynsoo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lynsoo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lynsoo/subscriptions",
"organizations_url": "https://api.github.com/users/Lynsoo/orgs",
"repos_url": "https://api.github.com/users/Lynsoo/repos",
"events_url": "https://api.github.com/users/Lynsoo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lynsoo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-30T19:49:17 | 2025-05-02T09:23:23 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37901",
"html_url": "https://github.com/huggingface/transformers/pull/37901",
"diff_url": "https://github.com/huggingface/transformers/pull/37901.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37901.patch",
"merged_at": null
} | # What does this PR do?
Fixes the `do_sample` default issue: it should default to `False`, but it was reported that when it is not explicitly defined it defaults to `True`.
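The intended default resolution can be sketched as follows (a hypothetical helper for illustration, not the actual `GenerationConfig` code):

```python
def resolve_do_sample(generation_kwargs):
    # An unset do_sample should mean greedy decoding (False); the bug
    # report describes it silently behaving as True instead.
    return generation_kwargs.get("do_sample", False)

print(resolve_do_sample({}))                   # False -> greedy decoding
print(resolve_do_sample({"do_sample": True}))  # True  -> sampling
```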
Fixes #37891
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case : https://github.com/huggingface/transformers/issues/37891
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37901/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37901/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37900 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37900/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37900/comments | https://api.github.com/repos/huggingface/transformers/issues/37900/events | https://github.com/huggingface/transformers/issues/37900 | 3,032,205,541 | I_kwDOCUB6oc60u8jl | 37,900 | Error in input expansion for `generate` with `num_return_sequences` > 1 for multi-image inputs to `AutoModelForImageTextToText` | {
"login": "saujasv",
"id": 14196644,
"node_id": "MDQ6VXNlcjE0MTk2NjQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/14196644?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saujasv",
"html_url": "https://github.com/saujasv",
"followers_url": "https://api.github.com/users/saujasv/followers",
"following_url": "https://api.github.com/users/saujasv/following{/other_user}",
"gists_url": "https://api.github.com/users/saujasv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saujasv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saujasv/subscriptions",
"organizations_url": "https://api.github.com/users/saujasv/orgs",
"repos_url": "https://api.github.com/users/saujasv/repos",
"events_url": "https://api.github.com/users/saujasv/events{/privacy}",
"received_events_url": "https://api.github.com/users/saujasv/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | [] | 2025-04-30T18:59:49 | 2025-06-08T08:02:25 | 2025-06-08T08:02:25 | NONE | null | null | null | null | ### System Info
```
- `transformers` version: 4.51.3
- Platform: Linux-5.14.0-427.40.1.el9_4.x86_64-x86_64-with-glibc2.34
- Python version: 3.12.7
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: 1.4.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- deepspeed_config: {'gradient_accumulation_steps': 4, 'offload_optimizer_device': 'cpu', 'offload_param_device': 'cpu', 'zero3_init_flag': False, 'zero3_save_16bit_model': True, 'zero_stage': 3}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: 0.15.1
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA L40S
```
### Who can help?
@zucchini-nlp @amyeroberts @qubvel
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I want to generate multiple responses to the same prompt with an image-text-to-text model. One straightforward way to do this is to use the `generate` function with `num_return_sequences` > 1 in the `GenerationConfig`. However, there appears to be an issue with this. I will use `google/gemma-3-12b-it` to present the issue, but I have anecdotally observed it with other models as well (`mistral-community/pixtral-12b`, `mistralai/Mistral-Small-3.1-24B-Base-2503`, etc.), so I am not sure to what extent the specific model influences this issue.
When using generate with `num_return_sequences` > 1, the inputs are first expanded and then passed to the sample function.
https://github.com/huggingface/transformers/blob/86777b5e2f651d7f7c46db919beb13893743a5b5/src/transformers/generation/utils.py#L2486-L2492
I suspect that the expansion of image inputs does not work as expected when multiple images are present, leading to this error. More details in the reproduction/expected behavior sections.
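For intuition, the expansion step is conceptually a `repeat_interleave` over the batch dimension of every model input. The sketch below is a simplified illustration, not the actual `transformers` implementation (`expand_inputs` is a hypothetical helper); if an image tensor is laid out per-image rather than per-batch-row, a naive repeat like this would pair the expanded prompts with the wrong images, which could explain the symptom described here:

```python
import torch

def expand_inputs(input_ids, pixel_values, num_return_sequences):
    # Repeat each batch row so every copy of a prompt sees the same images.
    # This assumes pixel_values has one row per *batch example*; if it instead
    # has one row per *image* (as in some multi-image models), repeating along
    # dim 0 would misalign images and prompts.
    expanded_ids = input_ids.repeat_interleave(num_return_sequences, dim=0)
    expanded_pixels = pixel_values.repeat_interleave(num_return_sequences, dim=0)
    return expanded_ids, expanded_pixels
```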
Here is a code snippet that reproduces the behavior in my setting:
```python
from transformers import AutoModelForImageTextToText, AutoProcessor
gemma_processor = AutoProcessor.from_pretrained(
"google/gemma-3-12b-it", trust_remote_code=True
)
gemma_model = AutoModelForImageTextToText.from_pretrained(
"google/gemma-3-12b-it",
trust_remote_code=True,
attn_implementation="flash_attention_2",
device_map="cuda:1",
torch_dtype="bfloat16",
).eval()
messages = [
{
"role": "system",
"content": [
{
"type": "text",
"text": "Generate a message referring to one of the images.",
}
],
},
{
"role": "user",
"content": [
{
"type": "text",
"text": "I will show 4 images labelled as A, B, C, D. I will then mention an image. Describe the image corresponding to the label. Your response should only contain the message. Your message does not need to be a full sentence. Your message should be a fluent description.",
},
{"type": "text", "text": "Round 1, "},
{"type": "text", "text": "\nImage A: "},
{
"type": "image",
"url": "http://images.cocodataset.org/val2014/COCO_val2014_000000166401.jpg",
},
{"type": "text", "text": "\nImage B: "},
{
"type": "image",
"url": "http://images.cocodataset.org/val2014/COCO_val2014_000000140076.jpg",
},
{"type": "text", "text": "\nImage C: "},
{
"type": "image",
"url": "http://images.cocodataset.org/val2014/COCO_val2014_000000290477.jpg",
},
{"type": "text", "text": "\nImage D: "},
{
"type": "image",
"url": "http://images.cocodataset.org/val2014/COCO_val2014_000000213224.jpg",
},
{
"type": "text",
"text": "Describe Image B. Generate only a message containing a description.",
},
],
},
]
inputs = gemma_processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
)
output_tokens = gemma_model.generate(
**inputs.to(gemma_model.device, gemma_model.dtype),
do_sample=True,
max_new_tokens=128,
temperature=1.0,
top_p=1.0,
num_return_sequences=8,
tokenizer=gemma_processor.tokenizer,
)
outputs = gemma_processor.batch_decode(
output_tokens[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)
```
This yields the following list for `outputs`:
```
['A luxurious bathroom space with dark cabinetry, a large countertop sink and mirror, a tiled accent wall, and a soaking tub in the corner.',
'A luxury bathroom with dark cabinetry, a stone countertop, and a large mirror illuminated by a modern light fixture; a potted plant and candles add decorative touches alongside a glimpse of a bathtub and a view out a window.',
'A vibrant patchwork quilt hangs above a dark wood dining table set with red placemats, adorned with a vase of yellow tulips and a figurine.',
'A dining room scene with a patchwork wall hanging, dark leather chairs, a wooden table set with red placemats, and vibrant tulips in a glass vase.',
'A lush, green foliage arrangement bursts from a metallic vase, resting on a vibrant purple cloth atop a wooden altar; flanked by tall candlesticks.',
'A vibrant, leafy arrangement sits in a decorative bronze vase, centered on a purple runner, flanked by tall candlesticks in a church setting.',
'A vibrant arrangement of lilies, carnations, and other blossoms overflowing from a clear glass vase, complemented by smaller vases of red flowers on a wooden table.',
'A vibrant flower arrangement in a clear glass vase, complemented by smaller vases with red blooms, all sitting on a wooden table.']
```
Note how the first two captions describe the first image, the second two describe the second image, and so on. This should not be the case: the model is capable of describing the correct image. How this can be determined is described under expected behavior.
### Expected behavior
If, instead of asking for 8 completions for 1 prompt, I ask for 1 completion each of 8 copies of the prompt, this issue is fixed.
```python
inputs = gemma_processor.apply_chat_template(
[messages for _ in range(8)],
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
)
output_tokens = gemma_model.generate(
**inputs.to(gemma_model.device, gemma_model.dtype),
do_sample=True,
max_new_tokens=128,
temperature=1.0,
top_p=1.0,
num_return_sequences=1,
tokenizer=gemma_processor.tokenizer,
)
outputs = gemma_processor.batch_decode(
output_tokens[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)
```
yields the outputs
```
['A vibrant patchwork quilt adorns a stark white wall, centered above a dark wood dining table set with cheerful tulips.',
'A vibrant, patchwork textile hangs on a crisp white wall, complemented by a dark wooden chair and a table set with red placemats and a vase of tulips.',
'A vibrant, patchwork quilt dominates a white wall, framed by a dark wooden chair and table with red accents and a tulip arrangement.',
'A vibrant, intricately patched textile hangs on a white wall, complemented by a wooden chair and a dining table set with tulips and place settings.',
'A vibrant patchwork quilt dominates the wall above a dark wooden table set with black chairs and a vase of tulips.',
'A vibrant patchwork quilt adorns a white wall, centered above a dark wood dining table set with black chairs and a bouquet of tulips.',
'A vibrant patchwork textile hangs on a white wall, complemented by a dark wood dining table set with red placemats and a vase of tulips.',
'A vibrant patchwork wall hanging dominates, framed by a simple white wall, accented by a dark wooden chair and a table set with tulips.']
```
which is expected behavior.
This suggests a bug in how the inputs are expanded for generation. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37900/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37899 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37899/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37899/comments | https://api.github.com/repos/huggingface/transformers/issues/37899/events | https://github.com/huggingface/transformers/pull/37899 | 3,032,042,810 | PR_kwDOCUB6oc6Ui4_3 | 37,899 | fixed gemma3 collection path pointing to llama 2 collection. | {
"login": "dmgcsilva",
"id": 43959937,
"node_id": "MDQ6VXNlcjQzOTU5OTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/43959937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmgcsilva",
"html_url": "https://github.com/dmgcsilva",
"followers_url": "https://api.github.com/users/dmgcsilva/followers",
"following_url": "https://api.github.com/users/dmgcsilva/following{/other_user}",
"gists_url": "https://api.github.com/users/dmgcsilva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmgcsilva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmgcsilva/subscriptions",
"organizations_url": "https://api.github.com/users/dmgcsilva/orgs",
"repos_url": "https://api.github.com/users/dmgcsilva/repos",
"events_url": "https://api.github.com/users/dmgcsilva/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmgcsilva/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T17:50:09 | 2025-04-30T19:50:54 | 2025-04-30T19:50:54 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37899",
"html_url": "https://github.com/huggingface/transformers/pull/37899",
"diff_url": "https://github.com/huggingface/transformers/pull/37899.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37899.patch",
"merged_at": "2025-04-30T19:50:54"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Replaces a URL that should point to the Gemma 3 model collection but instead points to the Llama 2 collection.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37899/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37898 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37898/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37898/comments | https://api.github.com/repos/huggingface/transformers/issues/37898/events | https://github.com/huggingface/transformers/pull/37898 | 3,032,033,447 | PR_kwDOCUB6oc6Ui28B | 37,898 | Updated Zoedepth model card | {
"login": "miniMaddy",
"id": 77185670,
"node_id": "MDQ6VXNlcjc3MTg1Njcw",
"avatar_url": "https://avatars.githubusercontent.com/u/77185670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miniMaddy",
"html_url": "https://github.com/miniMaddy",
"followers_url": "https://api.github.com/users/miniMaddy/followers",
"following_url": "https://api.github.com/users/miniMaddy/following{/other_user}",
"gists_url": "https://api.github.com/users/miniMaddy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/miniMaddy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miniMaddy/subscriptions",
"organizations_url": "https://api.github.com/users/miniMaddy/orgs",
"repos_url": "https://api.github.com/users/miniMaddy/repos",
"events_url": "https://api.github.com/users/miniMaddy/events{/privacy}",
"received_events_url": "https://api.github.com/users/miniMaddy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T17:45:54 | 2025-05-27T17:06:54 | 2025-05-27T17:06:54 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37898",
"html_url": "https://github.com/huggingface/transformers/pull/37898",
"diff_url": "https://github.com/huggingface/transformers/pull/37898.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37898.patch",
"merged_at": "2025-05-27T17:06:54"
} | # What does this PR do?
Updated zoedepth model card
Fixes #36979
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Models:
- vision models: @amyeroberts
Documentation: @stevhliu | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37898/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37897 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37897/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37897/comments | https://api.github.com/repos/huggingface/transformers/issues/37897/events | https://github.com/huggingface/transformers/pull/37897 | 3,031,705,934 | PR_kwDOCUB6oc6UhuTB | 37,897 | Add LlamaForSequenceClassification example to docs | {
"login": "suryaprasanthcse",
"id": 208653497,
"node_id": "U_kgDODG_MuQ",
"avatar_url": "https://avatars.githubusercontent.com/u/208653497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suryaprasanthcse",
"html_url": "https://github.com/suryaprasanthcse",
"followers_url": "https://api.github.com/users/suryaprasanthcse/followers",
"following_url": "https://api.github.com/users/suryaprasanthcse/following{/other_user}",
"gists_url": "https://api.github.com/users/suryaprasanthcse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suryaprasanthcse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suryaprasanthcse/subscriptions",
"organizations_url": "https://api.github.com/users/suryaprasanthcse/orgs",
"repos_url": "https://api.github.com/users/suryaprasanthcse/repos",
"events_url": "https://api.github.com/users/suryaprasanthcse/events{/privacy}",
"received_events_url": "https://api.github.com/users/suryaprasanthcse/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T15:44:46 | 2025-05-03T12:35:29 | 2025-05-03T12:35:28 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37897",
"html_url": "https://github.com/huggingface/transformers/pull/37897",
"diff_url": "https://github.com/huggingface/transformers/pull/37897.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37897.patch",
"merged_at": null
} | # What does this PR do?
Ref #36979
Added a minimal working example for `LlamaForSequenceClassification` in the LLaMA documentation.
**Motivation**:
This helps users understand how to use LLaMA for classification tasks, as requested in the issue.
**Changes**:
- Added Python code example showing:
- Model loading
- Tokenization
- Prediction
## Before submitting
- [x] This PR fixes a typo or improves the docs
- [x] Did you read the contributor guideline?
- [x] Did you make sure to update the documentation?
## Who can review?
@ArthurZucker @stevhliu | {
"login": "suryaprasanthcse",
"id": 208653497,
"node_id": "U_kgDODG_MuQ",
"avatar_url": "https://avatars.githubusercontent.com/u/208653497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suryaprasanthcse",
"html_url": "https://github.com/suryaprasanthcse",
"followers_url": "https://api.github.com/users/suryaprasanthcse/followers",
"following_url": "https://api.github.com/users/suryaprasanthcse/following{/other_user}",
"gists_url": "https://api.github.com/users/suryaprasanthcse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suryaprasanthcse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suryaprasanthcse/subscriptions",
"organizations_url": "https://api.github.com/users/suryaprasanthcse/orgs",
"repos_url": "https://api.github.com/users/suryaprasanthcse/repos",
"events_url": "https://api.github.com/users/suryaprasanthcse/events{/privacy}",
"received_events_url": "https://api.github.com/users/suryaprasanthcse/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37897/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37896 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37896/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37896/comments | https://api.github.com/repos/huggingface/transformers/issues/37896/events | https://github.com/huggingface/transformers/pull/37896 | 3,031,646,582 | PR_kwDOCUB6oc6UhhQw | 37,896 | [tests] remove overload for deleted test (`test_offloaded_cache_implementation`) | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T15:25:12 | 2025-05-27T15:45:18 | 2025-05-27T15:45:15 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37896",
"html_url": "https://github.com/huggingface/transformers/pull/37896",
"diff_url": "https://github.com/huggingface/transformers/pull/37896.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37896.patch",
"merged_at": "2025-05-27T15:45:15"
} | # What does this PR do?
`test_offloaded_cache_implementation` was deleted in #37724
This PR deletes stray overloads (which were skipping the test) | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37896/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37895 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37895/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37895/comments | https://api.github.com/repos/huggingface/transformers/issues/37895/events | https://github.com/huggingface/transformers/issues/37895 | 3,031,586,571 | I_kwDOCUB6oc60slcL | 37,895 | How to backpropagate the gradients of the embeddings output by the image processor to the input image tensor? | {
"login": "weiminbai",
"id": 186148186,
"node_id": "U_kgDOCxhlWg",
"avatar_url": "https://avatars.githubusercontent.com/u/186148186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weiminbai",
"html_url": "https://github.com/weiminbai",
"followers_url": "https://api.github.com/users/weiminbai/followers",
"following_url": "https://api.github.com/users/weiminbai/following{/other_user}",
"gists_url": "https://api.github.com/users/weiminbai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weiminbai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weiminbai/subscriptions",
"organizations_url": "https://api.github.com/users/weiminbai/orgs",
"repos_url": "https://api.github.com/users/weiminbai/repos",
"events_url": "https://api.github.com/users/weiminbai/events{/privacy}",
"received_events_url": "https://api.github.com/users/weiminbai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-04-30T15:06:40 | 2025-05-01T13:36:24 | null | NONE | null | null | null | null | ### Feature request
I'm using the processor of Qwen2.5-VL, and the image processor within it should be Qwen2ImageProcessor. The input image I provide is a PyTorch tensor with gradients, and the processor outputs the feature embeddings of the image. How can I ensure that the gradient flow is not interrupted during this process?
### Motivation
I want to backpropagate the gradients of the embeddings output by the Qwen2 image processor to the input image tensor
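The usual workaround (a general technique, not existing `transformers` functionality) is to reimplement the preprocessing steps (rescale, normalize) as differentiable tensor ops instead of going through PIL/NumPy, so that every operation stays in the autograd graph. A toy reverse-mode autodiff scalar illustrates why this preserves gradients; all names here are invented for illustration:

```python
class Var:
    """Tiny reverse-mode autodiff scalar, to illustrate 'keeping the graph'."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __mul__(self, k):  # records the local derivative k w.r.t. self
        return Var(self.value * k, [(self, k)])

    def __sub__(self, k):  # subtracting a constant has local derivative 1
        return Var(self.value - k, [(self, 1.0)])

    def backward(self, g=1.0):
        self.grad += g
        for parent, local in self.parents:
            parent.backward(g * local)

pixel = Var(128.0)
# Differentiable preprocessing: scale to [0, 1], center, then rescale —
# every op records its parent, so gradients flow back to `pixel`.
feature = (pixel * (1 / 255.0) - 0.5) * 2.0
feature.backward()
print(round(pixel.grad, 6))  # → 0.007843 (i.e. 2/255)
```

A PIL or NumPy round-trip in the middle of this chain would discard the `parents` links, which is exactly what interrupts gradient flow in the real processor.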
### Your contribution
I can cooperate to fix this issue | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37895/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/37894 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37894/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37894/comments | https://api.github.com/repos/huggingface/transformers/issues/37894/events | https://github.com/huggingface/transformers/pull/37894 | 3,031,529,355 | PR_kwDOCUB6oc6UhHuY | 37,894 | [tests] reset logs in `torch.compile` test | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T14:48:12 | 2025-04-30T15:04:33 | 2025-04-30T15:04:28 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37894",
"html_url": "https://github.com/huggingface/transformers/pull/37894",
"diff_url": "https://github.com/huggingface/transformers/pull/37894.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37894.patch",
"merged_at": "2025-04-30T15:04:28"
} | # What does this PR do?
We set special log options in `test_generate_compile_model_forward`, but we don't reset them. This PR ensures we reset them.
(thank you @ydshieh for noticing it! https://github.com/huggingface/transformers/pull/37629#discussion_r2068663475) | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37894/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37893 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37893/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37893/comments | https://api.github.com/repos/huggingface/transformers/issues/37893/events | https://github.com/huggingface/transformers/pull/37893 | 3,031,272,158 | PR_kwDOCUB6oc6UgPYt | 37,893 | Feat: add warnings for unused keys and rules in tensor parallel | {
"login": "S1ro1",
"id": 54212263,
"node_id": "MDQ6VXNlcjU0MjEyMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54212263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/S1ro1",
"html_url": "https://github.com/S1ro1",
"followers_url": "https://api.github.com/users/S1ro1/followers",
"following_url": "https://api.github.com/users/S1ro1/following{/other_user}",
"gists_url": "https://api.github.com/users/S1ro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/S1ro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/S1ro1/subscriptions",
"organizations_url": "https://api.github.com/users/S1ro1/orgs",
"repos_url": "https://api.github.com/users/S1ro1/repos",
"events_url": "https://api.github.com/users/S1ro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/S1ro1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T13:22:14 | 2025-05-16T12:52:48 | 2025-05-16T12:52:47 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37893",
"html_url": "https://github.com/huggingface/transformers/pull/37893",
"diff_url": "https://github.com/huggingface/transformers/pull/37893.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37893.patch",
"merged_at": "2025-05-16T12:52:47"
} | # What does this PR do?
Implements extra warnings when tensor parallelism is applied to the model. To be exact, it
prints out the unused rules in the `tp_plan` and the layers that were not sharded by TP.
First in a series of PRs making TP more user-friendly.
cc @ArthurZucker
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
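As an aside, the detection logic described above can be sketched as a set comparison between the plan's patterns and the model's parameter names — an illustration only (the helper name, the glob matching, and the sample parameters are invented here, not taken from the PR):

```python
import fnmatch

def find_unused_and_unsharded(tp_plan, param_names, sharded_names):
    """Return (unused_rules, unsharded_params) for a tensor-parallel plan.

    tp_plan maps glob-style patterns (e.g. "model.layers.*.mlp.up_proj")
    to sharding styles; a rule is "unused" if it matched no parameter,
    and a parameter is "unsharded" if no rule actually sharded it.
    """
    unused_rules = {
        pattern for pattern in tp_plan
        if not any(fnmatch.fnmatch(name, pattern) for name in param_names)
    }
    unsharded = [name for name in param_names if name not in sharded_names]
    return unused_rules, unsharded

# Toy example: one rule matches nothing, and one bias is left unsharded.
tp_plan = {"model.layers.*.mlp.up_proj": "colwise",
           "model.layers.*.mlp.nonexistent": "rowwise"}
params = ["model.layers.0.mlp.up_proj", "model.layers.0.mlp.bias"]
sharded = {"model.layers.0.mlp.up_proj"}

unused, unsharded = find_unused_and_unsharded(tp_plan, params, sharded)
print(unused)     # the rule that matched no parameter
print(unsharded)  # the parameter no rule sharded
```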
| {
"login": "S1ro1",
"id": 54212263,
"node_id": "MDQ6VXNlcjU0MjEyMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54212263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/S1ro1",
"html_url": "https://github.com/S1ro1",
"followers_url": "https://api.github.com/users/S1ro1/followers",
"following_url": "https://api.github.com/users/S1ro1/following{/other_user}",
"gists_url": "https://api.github.com/users/S1ro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/S1ro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/S1ro1/subscriptions",
"organizations_url": "https://api.github.com/users/S1ro1/orgs",
"repos_url": "https://api.github.com/users/S1ro1/repos",
"events_url": "https://api.github.com/users/S1ro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/S1ro1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37893/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37892 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37892/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37892/comments | https://api.github.com/repos/huggingface/transformers/issues/37892/events | https://github.com/huggingface/transformers/pull/37892 | 3,031,248,444 | PR_kwDOCUB6oc6UgKWl | 37,892 | [chat] clean code and add base help | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T13:13:24 | 2025-05-01T14:12:21 | 2025-05-01T14:12:18 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37892",
"html_url": "https://github.com/huggingface/transformers/pull/37892",
"diff_url": "https://github.com/huggingface/transformers/pull/37892.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37892.patch",
"merged_at": "2025-05-01T14:12:18"
} | # What does this PR do?
Cleans `chat.py` before we start adding features. There are no functional changes, except for a short help message that is now printed at the start of the chat session.

Changes:
1. Adds a basic help message at the start of the chat session (previously, there was no information on e.g. how to exit the chat session)
2. Adds type hints and docstrings
3. Moves functions into class methods (these are chat-specific functions)
4. 120 char/line limit
5. Other related minor changes | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37892/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37891 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37891/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37891/comments | https://api.github.com/repos/huggingface/transformers/issues/37891/events | https://github.com/huggingface/transformers/issues/37891 | 3,031,202,896 | I_kwDOCUB6oc60rHxQ | 37,891 | do_sample does not default to False | {
"login": "edmondja",
"id": 11833428,
"node_id": "MDQ6VXNlcjExODMzNDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/11833428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edmondja",
"html_url": "https://github.com/edmondja",
"followers_url": "https://api.github.com/users/edmondja/followers",
"following_url": "https://api.github.com/users/edmondja/following{/other_user}",
"gists_url": "https://api.github.com/users/edmondja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edmondja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edmondja/subscriptions",
"organizations_url": "https://api.github.com/users/edmondja/orgs",
"repos_url": "https://api.github.com/users/edmondja/repos",
"events_url": "https://api.github.com/users/edmondja/events{/privacy}",
"received_events_url": "https://api.github.com/users/edmondja/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-30T12:56:02 | 2025-05-02T13:53:33 | 2025-05-01T07:30:07 | NONE | null | null | null | null | ### System Info
My colleague Amine noticed that, unlike what the documentation says, if `do_sample` is not explicitly specified it defaults to `True`.
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Here is a script showing that `do_sample=False` must be specified:
```python
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from PIL import Image
import requests
import torch

model_id = "google/gemma-3-4b-pt"

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model = Gemma3ForConditionalGeneration.from_pretrained(model_id).eval()
processor = AutoProcessor.from_pretrained(model_id)

prompt = "<start_of_image> in this image, there is"
model_inputs = processor(text=prompt, images=image, return_tensors="pt")
input_len = model_inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(
        **model_inputs,
        max_new_tokens=100,
        # do_sample=False,  # SPECIFYING IT IS MANDATORY AS THE DOCUMENTATION IS WRONG
    )
    generation = generation[0][input_len:]
    decoded = processor.decode(generation, skip_special_tokens=True)

print(decoded)
```
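For context, a likely explanation (an assumption, not verified against the `transformers` source here) is precedence: the documented `do_sample=False` is only the class-level default, and a model repo's `generation_config.json` can override it, while an explicit kwarg overrides both. A toy model of that precedence chain:

```python
class ToyGenerationConfig:
    """Toy precedence model: class default < model repo config < explicit kwarg."""
    CLASS_DEFAULTS = {"do_sample": False}

    def __init__(self, from_model_repo=None):
        # The repo's generation_config.json (if any) overrides class defaults.
        self.values = {**self.CLASS_DEFAULTS, **(from_model_repo or {})}

    def resolve(self, **user_kwargs):
        # Explicit generate(...) kwargs win over everything else.
        return {**self.values, **user_kwargs}

# A repo that ships "do_sample": true in its generation_config.json:
cfg = ToyGenerationConfig(from_model_repo={"do_sample": True})
print(cfg.resolve())                 # → {'do_sample': True}, doc default overridden
print(cfg.resolve(do_sample=False))  # → {'do_sample': False}, explicit kwarg wins
```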
### Expected behavior
The same text should be generated every time, since `do_sample` is supposed to default to `False` according to the documentation -> https://huggingface.co/docs/transformers/main_classes/text_generation | {
"login": "edmondja",
"id": 11833428,
"node_id": "MDQ6VXNlcjExODMzNDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/11833428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edmondja",
"html_url": "https://github.com/edmondja",
"followers_url": "https://api.github.com/users/edmondja/followers",
"following_url": "https://api.github.com/users/edmondja/following{/other_user}",
"gists_url": "https://api.github.com/users/edmondja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edmondja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edmondja/subscriptions",
"organizations_url": "https://api.github.com/users/edmondja/orgs",
"repos_url": "https://api.github.com/users/edmondja/repos",
"events_url": "https://api.github.com/users/edmondja/events{/privacy}",
"received_events_url": "https://api.github.com/users/edmondja/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37891/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37890 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37890/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37890/comments | https://api.github.com/repos/huggingface/transformers/issues/37890/events | https://github.com/huggingface/transformers/pull/37890 | 3,031,182,053 | PR_kwDOCUB6oc6Uf75z | 37,890 | Fix device mismatch by moving num_items_in_batch to loss device in fixed_cross_entropy (#37886) | {
"login": "NEREUScode",
"id": 174478950,
"node_id": "U_kgDOCmZWZg",
"avatar_url": "https://avatars.githubusercontent.com/u/174478950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NEREUScode",
"html_url": "https://github.com/NEREUScode",
"followers_url": "https://api.github.com/users/NEREUScode/followers",
"following_url": "https://api.github.com/users/NEREUScode/following{/other_user}",
"gists_url": "https://api.github.com/users/NEREUScode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NEREUScode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NEREUScode/subscriptions",
"organizations_url": "https://api.github.com/users/NEREUScode/orgs",
"repos_url": "https://api.github.com/users/NEREUScode/repos",
"events_url": "https://api.github.com/users/NEREUScode/events{/privacy}",
"received_events_url": "https://api.github.com/users/NEREUScode/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T12:48:06 | 2025-04-30T14:13:17 | 2025-04-30T14:13:17 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37890",
"html_url": "https://github.com/huggingface/transformers/pull/37890",
"diff_url": "https://github.com/huggingface/transformers/pull/37890.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37890.patch",
"merged_at": null
} | Fixes: #37886
This PR ensures that the `num_items_in_batch` tensor is moved to the same device as the `loss` tensor before performing division inside the `fixed_cross_entropy` function. This prevents runtime device-mismatch errors when models are trained on non-default devices (e.g., CUDA).
🔧 Changes made:
Updated `fixed_cross_entropy` to move `num_items_in_batch` to `loss.device` before the division when `reduction` is set to `"sum"`.
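Schematically, the guard amounts to the pattern below — shown with a tiny stand-in tensor class so it runs anywhere; this is an illustration of the idea, not the actual `fixed_cross_entropy` code:

```python
class FakeTensor:
    """Minimal stand-in for a torch.Tensor: tracks a value and a device string."""
    def __init__(self, value, device):
        self.value, self.device = value, device

    def to(self, device):
        return FakeTensor(self.value, device)  # "move" to another device

def scale_summed_loss(loss, num_items_in_batch):
    # Move the divisor to the loss's device before dividing, mirroring the fix:
    # tensors on different devices cannot be combined in one op.
    if isinstance(num_items_in_batch, FakeTensor) and num_items_in_batch.device != loss.device:
        num_items_in_batch = num_items_in_batch.to(loss.device)
    return FakeTensor(loss.value / num_items_in_batch.value, loss.device)

loss = FakeTensor(12.0, "cuda:0")
n = FakeTensor(4, "cpu")          # e.g. computed on CPU by the data pipeline
out = scale_summed_loss(loss, n)
print(out.value, out.device)      # → 3.0 cuda:0
```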
This fix is particularly relevant for `ForCausalLMLoss`, where `num_items_in_batch` may be on a different device than the logits or the loss. | {
"login": "NEREUScode",
"id": 174478950,
"node_id": "U_kgDOCmZWZg",
"avatar_url": "https://avatars.githubusercontent.com/u/174478950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NEREUScode",
"html_url": "https://github.com/NEREUScode",
"followers_url": "https://api.github.com/users/NEREUScode/followers",
"following_url": "https://api.github.com/users/NEREUScode/following{/other_user}",
"gists_url": "https://api.github.com/users/NEREUScode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NEREUScode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NEREUScode/subscriptions",
"organizations_url": "https://api.github.com/users/NEREUScode/orgs",
"repos_url": "https://api.github.com/users/NEREUScode/repos",
"events_url": "https://api.github.com/users/NEREUScode/events{/privacy}",
"received_events_url": "https://api.github.com/users/NEREUScode/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37890/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37889 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37889/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37889/comments | https://api.github.com/repos/huggingface/transformers/issues/37889/events | https://github.com/huggingface/transformers/pull/37889 | 3,031,029,492 | PR_kwDOCUB6oc6UfaUQ | 37,889 | add profiler to trainer | {
"login": "re-imagined",
"id": 11422477,
"node_id": "MDQ6VXNlcjExNDIyNDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/11422477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/re-imagined",
"html_url": "https://github.com/re-imagined",
"followers_url": "https://api.github.com/users/re-imagined/followers",
"following_url": "https://api.github.com/users/re-imagined/following{/other_user}",
"gists_url": "https://api.github.com/users/re-imagined/gists{/gist_id}",
"starred_url": "https://api.github.com/users/re-imagined/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/re-imagined/subscriptions",
"organizations_url": "https://api.github.com/users/re-imagined/orgs",
"repos_url": "https://api.github.com/users/re-imagined/repos",
"events_url": "https://api.github.com/users/re-imagined/events{/privacy}",
"received_events_url": "https://api.github.com/users/re-imagined/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-30T11:44:57 | 2025-07-29T06:36:30 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37889",
"html_url": "https://github.com/huggingface/transformers/pull/37889",
"diff_url": "https://github.com/huggingface/transformers/pull/37889.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37889.patch",
"merged_at": null
} | # What does this PR do?
Adds profiler support to the `Trainer`.
Related issue:
https://github.com/huggingface/transformers/issues/36360#issuecomment-2844195069
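Not the PR's implementation — just a minimal stdlib sketch of the general pattern (an optional profiler context wrapped around the step loop, as a trainer-level switch might expose it); the real PR presumably hooks `torch.profiler` instead:

```python
import cProfile
import contextlib
import io
import pstats

def train(steps, profile=False):
    """Toy training loop: runs the steps under an optional profiler context,
    mirroring the idea of a trainer-level `profile` switch."""
    prof = cProfile.Profile() if profile else None
    ctx = prof if profile else contextlib.nullcontext()
    with ctx:
        total = 0.0
        for _ in range(steps):
            total += sum(i * i for i in range(1000))  # stand-in for forward/backward
    if prof is not None:
        buf = io.StringIO()
        pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(5)
        return total, buf.getvalue()
    return total, None

loss, report = train(10, profile=True)
print(report.splitlines()[0])  # summary line of the profiler report
```

With `profile=False` the loop runs unchanged, so the feature costs nothing when disabled.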
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37889/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37888 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37888/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37888/comments | https://api.github.com/repos/huggingface/transformers/issues/37888/events | https://github.com/huggingface/transformers/pull/37888 | 3,030,962,550 | PR_kwDOCUB6oc6UfLm5 | 37,888 | Fix duplicate init self attention in Qwen3 MoE | {
"login": "yzlnew",
"id": 4904877,
"node_id": "MDQ6VXNlcjQ5MDQ4Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4904877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yzlnew",
"html_url": "https://github.com/yzlnew",
"followers_url": "https://api.github.com/users/yzlnew/followers",
"following_url": "https://api.github.com/users/yzlnew/following{/other_user}",
"gists_url": "https://api.github.com/users/yzlnew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yzlnew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzlnew/subscriptions",
"organizations_url": "https://api.github.com/users/yzlnew/orgs",
"repos_url": "https://api.github.com/users/yzlnew/repos",
"events_url": "https://api.github.com/users/yzlnew/events{/privacy}",
"received_events_url": "https://api.github.com/users/yzlnew/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T11:14:22 | 2025-04-30T12:29:07 | 2025-04-30T12:29:06 | NONE | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37888",
"html_url": "https://github.com/huggingface/transformers/pull/37888",
"diff_url": "https://github.com/huggingface/transformers/pull/37888.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37888.patch",
"merged_at": null
} | # What does this PR do?
Removes duplicated lines that were likely introduced by a manual editing error.
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37888/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37887 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37887/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37887/comments | https://api.github.com/repos/huggingface/transformers/issues/37887/events | https://github.com/huggingface/transformers/issues/37887 | 3,030,850,481 | I_kwDOCUB6oc60pxux | 37,887 | Performance of `load_state_dict` with large number of tensors (Qwen3 MoE) | {
"login": "woct0rdho",
"id": 23053399,
"node_id": "MDQ6VXNlcjIzMDUzMzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/23053399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/woct0rdho",
"html_url": "https://github.com/woct0rdho",
"followers_url": "https://api.github.com/users/woct0rdho/followers",
"following_url": "https://api.github.com/users/woct0rdho/following{/other_user}",
"gists_url": "https://api.github.com/users/woct0rdho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/woct0rdho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/woct0rdho/subscriptions",
"organizations_url": "https://api.github.com/users/woct0rdho/orgs",
"repos_url": "https://api.github.com/users/woct0rdho/repos",
"events_url": "https://api.github.com/users/woct0rdho/events{/privacy}",
"received_events_url": "https://api.github.com/users/woct0rdho/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | null | [] | 2025-04-30T10:26:05 | 2025-05-01T14:40:31 | 2025-05-01T14:35:18 | CONTRIBUTOR | null | null | null | null | ### Feature request
It takes a long time to load a model with a large number of tensors. Can we improve its performance?
### Motivation
For example, [Unsloth's Qwen3-30B-A3B](https://huggingface.co/unsloth/Qwen3-30B-A3B-bnb-4bit) has 31337 tensors in a file, because there are 128 experts in each layer, and it takes more than 15 minutes to load it on my computer, which significantly slows down the development around this model.
### Your contribution
Most of the time is spent in this loop: https://github.com/huggingface/transformers/blob/481de7204c6e065616d4f848ea7f69a2287727df/src/transformers/modeling_utils.py#L509
A simple improvement is to move
```python
k_dtype = f.get_slice(k).get_dtype()
if k_dtype in str_to_torch_dtype:
dtype = str_to_torch_dtype[k_dtype]
else:
raise ValueError(f"Cannot load safetensors of unknown dtype {k_dtype}")
```
into `if map_location == "meta":` so it's not executed when loading the weights normally. This reduces the loading time from 15 min to 2 min for me. But loading a dense model of the same size takes only seconds.
Then maybe it boils down to the performance of `get_tensor` but I can't see a simple way to improve it.
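To make the suggested restructuring concrete, here is a schematic version of the loop with a stub file handle (the names and the stub API are illustrative only, not the actual `modeling_utils` code): on the normal path, `get_slice` is never called.

```python
def load_keys(f, keys, map_location="cpu"):
    """Sketch of the loop: only the meta path pays for the per-key dtype lookup."""
    out = {}
    for k in keys:
        if map_location == "meta":
            dtype = f.get_slice(k).get_dtype()  # needed only to build meta tensors
            out[k] = ("meta", dtype)
        else:
            out[k] = f.get_tensor(k)            # hot path: no extra metadata call
    return out

class StubFile:
    """Stand-in for a safetensors handle that counts get_slice calls."""
    def __init__(self):
        self.slice_calls = 0

    def get_slice(self, k):
        self.slice_calls += 1
        class _Slice:  # minimal slice object exposing get_dtype()
            def get_dtype(self):
                return "F32"
        return _Slice()

    def get_tensor(self, k):
        return [0.0]  # pretend tensor data

f = StubFile()
load_keys(f, [f"expert.{i}.weight" for i in range(31337)])
print(f.slice_calls)  # → 0: the normal path never touches get_slice
```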
Even if the performance cannot be improved, it feels better if we add `logging.tqdm` to this loop. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37887/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37886 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37886/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37886/comments | https://api.github.com/repos/huggingface/transformers/issues/37886/events | https://github.com/huggingface/transformers/issues/37886 | 3,030,737,676 | I_kwDOCUB6oc60pWMM | 37,886 | num_items_in_batch should be moved to logits.device in ForCausalLMLoss too | {
"login": "situqingyun",
"id": 22349162,
"node_id": "MDQ6VXNlcjIyMzQ5MTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/22349162?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/situqingyun",
"html_url": "https://github.com/situqingyun",
"followers_url": "https://api.github.com/users/situqingyun/followers",
"following_url": "https://api.github.com/users/situqingyun/following{/other_user}",
"gists_url": "https://api.github.com/users/situqingyun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/situqingyun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/situqingyun/subscriptions",
"organizations_url": "https://api.github.com/users/situqingyun/orgs",
"repos_url": "https://api.github.com/users/situqingyun/repos",
"events_url": "https://api.github.com/users/situqingyun/events{/privacy}",
"received_events_url": "https://api.github.com/users/situqingyun/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-30T09:42:16 | 2025-06-02T15:48:36 | 2025-06-02T15:48:36 | NONE | null | null | null | null | ### System Info
transformers version: 4.51.1
If different layers of the model are placed on multiple GPUs, logits and num_items_in_batch may end up on different devices. In ForCausalLMLoss, shift_labels is moved to logits.device, but the same is not done for num_items_in_batch, which leads to the following error:
```
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 2245, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 2560, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 3736, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/trl/trainer/sft_trainer.py", line 474, in compute_loss
(loss, outputs) = super().compute_loss(
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 3801, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/accelerate/utils/operations.py", line 814, in forward
return model_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/accelerate/utils/operations.py", line 802, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/accelerate/hooks.py", line 176, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/transformers/utils/generic.py", line 965, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 843, in forward
loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/transformers/loss/loss_utils.py", line 63, in ForCausalLMLoss
loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/raid0/chen/cursor_project/jllm/.venv/lib/python3.12/site-packages/transformers/loss/loss_utils.py", line 37, in fixed_cross_entropy
loss = loss / num_items_in_batch
~~~~~^~~~~~~~~~~~~~~~~~~~
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!
```
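A minimal, device-agnostic sketch of the suggested fix follows; it mirrors what `ForCausalLMLoss` already does for `shift_labels`. The `T` stand-in class and the `scale_loss` helper are illustrative only; the real fix would call `.to(loss.device)` on the `num_items_in_batch` tensor inside `fixed_cross_entropy`.

```python
from dataclasses import dataclass

@dataclass
class T:
    """Minimal stand-in for a tensor that tracks its device."""
    value: float
    device: str

    def to(self, device):
        return T(self.value, device)

def scale_loss(loss, num_items_in_batch):
    # Mirror the shift_labels handling: move the divisor to the loss/logits
    # device before dividing, so multi-GPU placements don't raise.
    if num_items_in_batch is None:
        return loss
    if num_items_in_batch.device != loss.device:
        num_items_in_batch = num_items_in_batch.to(loss.device)
    return T(loss.value / num_items_in_batch.value, loss.device)
```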
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
example code:
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os

# Make sure multiple GPUs are used
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"
os.environ['http_proxy'] = 'socks5://localhost:9701'
os.environ['https_proxy'] = 'socks5://localhost:9701'

import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    TrainingArguments,
    Trainer,
    DataCollatorForLanguageModeling,
)
import logging

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# # Custom Trainer class to work around the num_items_in_batch argument issue
# class CustomTrainer(Trainer):
#     def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):
#         # Accept the num_items_in_batch argument without using it, for
#         # compatibility with how newer Trainer versions call compute_loss
#         if "labels" in inputs:
#             labels = inputs.pop("labels")
#         else:
#             labels = None
#         outputs = model(**inputs, labels=labels)
#         loss = outputs.loss
#         return (loss, outputs) if return_outputs else loss

def main():
    # Model and tokenizer
    model_name = "Qwen/Qwen2.5-7B-Instruct"
    logger.info(f"Loading model {model_name}...")
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Put the model on its devices at init time, while letting the Trainer handle DDP
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    # Configure the tokenizer
    tokenizer.pad_token = tokenizer.eos_token
    # Load the dataset (a small sample for this example)
    logger.info("Loading dataset...")
    dataset = load_dataset("tatsu-lab/alpaca", split="train[:1000]")  # only the first 1000 samples

    # Data preprocessing function
    def preprocess_function(examples):
        # Concatenate inputs and outputs
        texts = []
        for instruction, input_text, output in zip(
            examples["instruction"], examples["input"], examples["output"]
        ):
            if input_text:
                prompt = f"### Instruction:\n{instruction}\n\n### Input:\n{input_text}\n\n### Response:\n"
            else:
                prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
            texts.append(prompt + output)
        # Tokenize the texts
        tokenized_inputs = tokenizer(
            texts,
            padding="max_length",
            truncation=True,
            max_length=512,
            return_tensors="pt",
        )
        # Set labels equal to input_ids for autoregressive training
        tokenized_inputs["labels"] = tokenized_inputs["input_ids"].clone()
        return tokenized_inputs

    logger.info("Preprocessing data...")
    tokenized_dataset = dataset.map(
        preprocess_function,
        batched=True,
        remove_columns=dataset.column_names,
    )
    # Configure training arguments
    training_args = TrainingArguments(
        output_dir="./results",
        per_device_train_batch_size=4,  # adjust to the available GPU memory
        gradient_accumulation_steps=8,  # gradient accumulation
        learning_rate=2e-5,
        num_train_epochs=1,
        logging_dir="./logs",
        logging_steps=10,
        save_steps=100,
        save_total_limit=2,
        fp16=True,  # use mixed-precision training
        remove_unused_columns=False,
        # Configure distributed training correctly
        ddp_find_unused_parameters=False,
        dataloader_drop_last=True,
    )
    # Create the data collator
    data_collator = DataCollatorForLanguageModeling(
        tokenizer=tokenizer,
        mlm=False,  # no masked language modeling
    )
    # Set up the trainer
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=tokenized_dataset,
        data_collator=data_collator,
    )
    # Start training
    logger.info("Starting training...")
    trainer.train()
    # Save the model
    logger.info("Saving model...")
    trainer.save_model("./qwen2.5-7b-instruct-finetuned")
    tokenizer.save_pretrained("./qwen2.5-7b-instruct-finetuned")
    logger.info("Training finished!")

if __name__ == "__main__":
    main()
```
### Expected behavior
The model can be trained on multiple GPUs using the trainer. | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37886/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37885 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37885/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37885/comments | https://api.github.com/repos/huggingface/transformers/issues/37885/events | https://github.com/huggingface/transformers/pull/37885 | 3,030,669,857 | PR_kwDOCUB6oc6UeLeu | 37,885 | Trigger CircleCI via GitHub Actions when `ready for review` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T09:19:58 | 2025-05-09T09:45:06 | 2025-05-09T09:45:04 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37885",
"html_url": "https://github.com/huggingface/transformers/pull/37885",
"diff_url": "https://github.com/huggingface/transformers/pull/37885.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37885.patch",
"merged_at": "2025-05-09T09:45:04"
} | # What does this PR do?
So far we are using another approach, which prevents PRs from forked repositories from triggering CircleCI when marked `ready for review`. This PR tries to use a GitHub Actions workflow with `pull_request_review: ready_for_review` to trigger the CircleCI pipeline instead.
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37885/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37884 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37884/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37884/comments | https://api.github.com/repos/huggingface/transformers/issues/37884/events | https://github.com/huggingface/transformers/pull/37884 | 3,030,529,606 | PR_kwDOCUB6oc6UdtAV | 37,884 | Get our efficiency back | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-30T08:23:52 | 2025-05-09T13:26:29 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37884",
"html_url": "https://github.com/huggingface/transformers/pull/37884",
"diff_url": "https://github.com/huggingface/transformers/pull/37884.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37884.patch",
"merged_at": null
} | jedi is unfortunately extremely slow at parsing imports such as `from .models import *`; to make it faster, we have no choice but to put the individual objects back in the main init.
These are only necessary for intellisense/static type checkers; they don't impact the actual importable objects.
They are maintained with a new `python utils/check_main_init.py` script that runs as part of `make fixup` and updates the main init accordingly. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37884/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37883 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37883/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37883/comments | https://api.github.com/repos/huggingface/transformers/issues/37883/events | https://github.com/huggingface/transformers/issues/37883 | 3,030,494,500 | I_kwDOCUB6oc60oa0k | 37,883 | ModernBert Tokenizer flag `is_split_into_words` not working | {
"login": "bablf",
"id": 57184353,
"node_id": "MDQ6VXNlcjU3MTg0MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/57184353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bablf",
"html_url": "https://github.com/bablf",
"followers_url": "https://api.github.com/users/bablf/followers",
"following_url": "https://api.github.com/users/bablf/following{/other_user}",
"gists_url": "https://api.github.com/users/bablf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bablf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bablf/subscriptions",
"organizations_url": "https://api.github.com/users/bablf/orgs",
"repos_url": "https://api.github.com/users/bablf/repos",
"events_url": "https://api.github.com/users/bablf/events{/privacy}",
"received_events_url": "https://api.github.com/users/bablf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-30T08:11:16 | 2025-06-08T08:02:27 | 2025-06-08T08:02:27 | NONE | null | null | null | null | ### System Info
The ModernBERT tokenizer does not work as expected with the `is_split_into_words` flag: it does not insert the leading spaces that other tokenizers do.
Minimal sample to reproduce:
`transformers` version: 4.50.1
- Platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.39
- Python version: 3.10.15
- Huggingface_hub version: 0.26.0
- Safetensors version: 0.4.5
- Accelerate version: 1.6.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 3080 Ti Laptop GP
### Who can help?
@ArthurZucker @itazap
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
tk = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
tk.tokenize(["This", "is", "a", "test"], is_split_into_words=True)
```
Output: `['This', 'is', 'a', 'test']` which is missing the spaces
Expected Output: `['This', 'Ġis', 'Ġa', 'Ġtest']`
This leads to different tokenization:
`tk.encode(["This", "is", "a", "test"], is_split_into_words=True)` returns `[50281, 1552, 261, 66, 2566, 50282]`, while `tk.encode("This is a test")` returns `[50281, 1552, 310, 247, 1071, 50282]`.
`tk.tokenize("This is a test")` and `tk.encode("This is a test")` work as expected.
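Until this is fixed, one possible workaround is to re-insert the leading space manually and encode each pre-split word on its own. The `encode_pre_split` helper below is a hypothetical sketch; with the real tokenizer, `encode_one` would be something like `lambda t: tk.encode(t, add_special_tokens=False)`.

```python
def encode_pre_split(words, encode_one):
    # Prefix every word after the first with a space -- which is what
    # is_split_into_words should do for byte-level BPE tokenizers -- then
    # encode each word independently and concatenate the ids.
    ids = []
    for i, word in enumerate(words):
        text = word if i == 0 else " " + word
        ids.extend(encode_one(text))
    return ids
```

Note this only approximates the unsplit encoding: BPE merges never cross the word boundaries you supply, which is also the intended semantics of `is_split_into_words`.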
### Expected behavior
see above | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37883/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/37883/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37882 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37882/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37882/comments | https://api.github.com/repos/huggingface/transformers/issues/37882/events | https://github.com/huggingface/transformers/pull/37882 | 3,030,475,545 | PR_kwDOCUB6oc6UdhKW | 37,882 | make mistral3 pass on xpu | {
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T08:03:06 | 2025-05-09T07:06:56 | 2025-05-09T06:41:12 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37882",
"html_url": "https://github.com/huggingface/transformers/pull/37882",
"diff_url": "https://github.com/huggingface/transformers/pull/37882.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37882.patch",
"merged_at": "2025-05-09T06:41:12"
} | @ydshieh , pls help review, thx | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37882/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37881 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37881/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37881/comments | https://api.github.com/repos/huggingface/transformers/issues/37881/events | https://github.com/huggingface/transformers/pull/37881 | 3,030,207,248 | PR_kwDOCUB6oc6UcnH2 | 37,881 | make sure lr is not a tensor | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T05:50:38 | 2025-04-30T12:23:40 | 2025-04-30T12:23:40 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37881",
"html_url": "https://github.com/huggingface/transformers/pull/37881",
"diff_url": "https://github.com/huggingface/transformers/pull/37881.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37881.patch",
"merged_at": "2025-04-30T12:23:40"
} | # What does this PR do?
There seems to be a regression where using DeepSpeed is broken: saving checkpoints fails during the trainer-state save step because the LR is still a tensor. This is probably also the root cause of #37704, so fixing it here should resolve that issue too. The tensor check is applied in the `else` branch of the conditional, but not in the `if` branch.
This PR makes sure that when we grab the learning rate while using DeepSpeed, it isn't a `Tensor`.
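The guard can be sketched as below. This is an illustrative helper (the name `as_serializable_lr` is made up, not the actual trainer code); the idea is just to detach a 0-d tensor to a plain float before it reaches the JSON-serialized trainer state.

```python
def as_serializable_lr(value):
    # A 0-d tensor exposes .item(); a plain Python float does not. Detaching
    # to a float keeps trainer_state.json serializable via json.dumps().
    if hasattr(value, "item"):
        value = value.item()
    return value
```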
Saving the trainer state fails with:
```
[rank0]: File "/workspace/axolotl/src/axolotl/train.py", line 215, in execute_training
[rank0]: trainer.train(resume_from_checkpoint=resume_from_checkpoint)
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer.py", line 2245, in train
[rank0]: return inner_training_loop(
[rank0]: ^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer.py", line 2661, in _inner_training_loop
[rank0]: self._maybe_log_save_evaluate(
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer.py", line 3103, in _maybe_log_save_evaluate
[rank0]: self._save_checkpoint(model, trial)
[rank0]: File "/workspace/axolotl/src/axolotl/core/trainers/base.py", line 612, in _save_checkpoint
[rank0]: return super()._save_checkpoint(model, trial, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer.py", line 3228, in _save_checkpoint
[rank0]: self.state.save_to_json(os.path.join(output_dir, TRAINER_STATE_NAME))
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/trainer_callback.py", line 146, in save_to_json
[rank0]: json_string = json.dumps(dataclasses.asdict(self), indent=2, sort_keys=True) + "\n"
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/json/__init__.py", line 238, in dumps
[rank0]: **kw).encode(obj)
[rank0]: ^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/json/encoder.py", line 202, in encode
[rank0]: chunks = list(chunks)
[rank0]: ^^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/json/encoder.py", line 432, in _iterencode
[rank0]: yield from _iterencode_dict(o, _current_indent_level)
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/json/encoder.py", line 406, in _iterencode_dict
[rank0]: yield from chunks
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/json/encoder.py", line 326, in _iterencode_list
[rank0]: yield from chunks
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/json/encoder.py", line 406, in _iterencode_dict
[rank0]: yield from chunks
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/json/encoder.py", line 439, in _iterencode
[rank0]: o = _default(o)
[rank0]: ^^^^^^^^^^^
[rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/json/encoder.py", line 180, in default
[rank0]: raise TypeError(f'Object of type {o.__class__.__name__} '
[rank0]: TypeError: Object of type Tensor is not JSON serializable
```
Here's the DeepSpeed ZeRO-3 config I'm using, though it's likely not relevant: https://wandb.ai/axolotl-ai/qwen3-distributed/runs/mg3kc9wv/files/tmp/deepspeed_config_d31f2b33.json
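The traceback above boils down to a `torch.Tensor` (likely a logged metric that was never converted to a Python scalar) ending up inside `TrainerState.log_history`, which `json.dumps` cannot serialize. A minimal, framework-free sketch of the failure mode and a generic workaround — `FakeTensor` here is a toy stand-in for a 0-dim tensor, and `sanitize` is an illustrative helper, not the actual transformers fix:

```python
import json

class FakeTensor:
    """Toy stand-in for a 0-dim torch.Tensor that leaked into log_history."""
    def __init__(self, value):
        self.value = value
    def item(self):
        return self.value

def sanitize(obj):
    """Recursively convert tensor-like objects to plain Python scalars so
    json.dumps succeeds (the shape of fix TrainerState.save_to_json needs)."""
    if hasattr(obj, "item"):  # 0-dim tensors expose .item()
        return obj.item()
    if isinstance(obj, dict):
        return {k: sanitize(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [sanitize(v) for v in obj]
    return obj

state = {"log_history": [{"loss": FakeTensor(0.42), "step": 10}]}
print(json.dumps(sanitize(state)))  # serializes cleanly
```

In practice the fix belongs upstream (call `.item()` when logging the metric), but a sanitizer like this is a workable stopgap at save time.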
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37881/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37880 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37880/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37880/comments | https://api.github.com/repos/huggingface/transformers/issues/37880/events | https://github.com/huggingface/transformers/pull/37880 | 3,030,074,670 | PR_kwDOCUB6oc6UcKyi | 37,880 | Fix bugs in DynamicCache | {
"login": "tugsbayasgalan",
"id": 16603271,
"node_id": "MDQ6VXNlcjE2NjAzMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/16603271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tugsbayasgalan",
"html_url": "https://github.com/tugsbayasgalan",
"followers_url": "https://api.github.com/users/tugsbayasgalan/followers",
"following_url": "https://api.github.com/users/tugsbayasgalan/following{/other_user}",
"gists_url": "https://api.github.com/users/tugsbayasgalan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tugsbayasgalan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tugsbayasgalan/subscriptions",
"organizations_url": "https://api.github.com/users/tugsbayasgalan/orgs",
"repos_url": "https://api.github.com/users/tugsbayasgalan/repos",
"events_url": "https://api.github.com/users/tugsbayasgalan/events{/privacy}",
"received_events_url": "https://api.github.com/users/tugsbayasgalan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T04:14:40 | 2025-06-24T17:43:40 | 2025-06-24T17:43:40 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37880",
"html_url": "https://github.com/huggingface/transformers/pull/37880",
"diff_url": "https://github.com/huggingface/transformers/pull/37880.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37880.patch",
"merged_at": "2025-06-24T17:43:40"
} | # What does this PR do?
When we flatten a `DynamicCache` for export, we never end up flattening its inner tensors, because at the point where flattening happens there are 0 tensors initialized. As a result, we never correctly tested the `ep.module()(*args, **kwargs)` behaviour when exporting with a populated cache.
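The failure mode described above can be illustrated with a toy container and a pytree-style flatten function. This is plain Python for illustration only — `ToyCache` and `flatten` are hypothetical stand-ins, not the actual `DynamicCache` or `torch.utils._pytree` registration:

```python
class ToyCache:
    """Minimal stand-in for DynamicCache: holds per-layer tensors, starts empty."""
    def __init__(self):
        self.layers = []  # populated lazily during the first forward pass

def flatten(cache):
    """Pytree-style flatten: returns (leaves, spec)."""
    return list(cache.layers), len(cache.layers)

# At export time the cache is still empty, so flattening captures 0 leaves ...
cache = ToyCache()
leaves, spec = flatten(cache)
assert leaves == [] and spec == 0

# ... so when the cache is populated later, the tensors the model actually
# reads were never part of the flattened inputs the export path exercised.
cache.layers.extend(["k0", "v0"])
stale_leaves = leaves  # the spec captured earlier still says "no leaves"
print(len(stale_leaves))  # 0, even though the cache now holds 2 entries
```

Exporting with a pre-populated cache (so flattening sees real tensors) is what surfaces the bugs this PR fixes.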
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37880/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37880/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37879 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37879/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37879/comments | https://api.github.com/repos/huggingface/transformers/issues/37879/events | https://github.com/huggingface/transformers/pull/37879 | 3,029,854,779 | PR_kwDOCUB6oc6UbcIe | 37,879 | Fix qwen2-vl-docs. | {
"login": "zhanluxianshen",
"id": 161462588,
"node_id": "U_kgDOCZ-5PA",
"avatar_url": "https://avatars.githubusercontent.com/u/161462588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhanluxianshen",
"html_url": "https://github.com/zhanluxianshen",
"followers_url": "https://api.github.com/users/zhanluxianshen/followers",
"following_url": "https://api.github.com/users/zhanluxianshen/following{/other_user}",
"gists_url": "https://api.github.com/users/zhanluxianshen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhanluxianshen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhanluxianshen/subscriptions",
"organizations_url": "https://api.github.com/users/zhanluxianshen/orgs",
"repos_url": "https://api.github.com/users/zhanluxianshen/repos",
"events_url": "https://api.github.com/users/zhanluxianshen/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhanluxianshen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-30T01:02:35 | 2025-04-30T15:31:52 | 2025-04-30T12:32:22 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37879",
"html_url": "https://github.com/huggingface/transformers/pull/37879",
"diff_url": "https://github.com/huggingface/transformers/pull/37879.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37879.patch",
"merged_at": "2025-04-30T12:32:22"
} | null | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37879/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37878 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37878/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37878/comments | https://api.github.com/repos/huggingface/transformers/issues/37878/events | https://github.com/huggingface/transformers/pull/37878 | 3,029,747,417 | PR_kwDOCUB6oc6UbE1b | 37,878 | PerceptionLM | {
"login": "shuminghu",
"id": 2934295,
"node_id": "MDQ6VXNlcjI5MzQyOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2934295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuminghu",
"html_url": "https://github.com/shuminghu",
"followers_url": "https://api.github.com/users/shuminghu/followers",
"following_url": "https://api.github.com/users/shuminghu/following{/other_user}",
"gists_url": "https://api.github.com/users/shuminghu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shuminghu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shuminghu/subscriptions",
"organizations_url": "https://api.github.com/users/shuminghu/orgs",
"repos_url": "https://api.github.com/users/shuminghu/repos",
"events_url": "https://api.github.com/users/shuminghu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shuminghu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-04-29T23:39:54 | 2025-07-11T09:07:33 | 2025-07-11T09:07:33 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37878",
"html_url": "https://github.com/huggingface/transformers/pull/37878",
"diff_url": "https://github.com/huggingface/transformers/pull/37878.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37878.patch",
"merged_at": "2025-07-11T09:07:33"
} | This PR implements PerceptionLM released by Meta:
https://github.com/facebookresearch/perception_models
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37878/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37878/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37877 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37877/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37877/comments | https://api.github.com/repos/huggingface/transformers/issues/37877/events | https://github.com/huggingface/transformers/pull/37877 | 3,029,652,294 | PR_kwDOCUB6oc6UawZU | 37,877 | parallelism goes brrr | {
"login": "NouamaneTazi",
"id": 29777165,
"node_id": "MDQ6VXNlcjI5Nzc3MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NouamaneTazi",
"html_url": "https://github.com/NouamaneTazi",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions",
"organizations_url": "https://api.github.com/users/NouamaneTazi/orgs",
"repos_url": "https://api.github.com/users/NouamaneTazi/repos",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/NouamaneTazi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 2760822153,
"node_id": "MDU6TGFiZWwyNzYwODIyMTUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tensor%20Parallel",
"name": "Tensor Parallel",
"color": "1AD0A8",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-04-29T22:22:27 | 2025-05-21T14:29:35 | 2025-05-20T14:22:52 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37877",
"html_url": "https://github.com/huggingface/transformers/pull/37877",
"diff_url": "https://github.com/huggingface/transformers/pull/37877.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37877.patch",
"merged_at": "2025-05-20T14:22:52"
} | # What does this PR do?
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37877/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 5,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37877/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37876 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37876/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37876/comments | https://api.github.com/repos/huggingface/transformers/issues/37876/events | https://github.com/huggingface/transformers/issues/37876 | 3,029,475,207 | I_kwDOCUB6oc60kh-H | 37,876 | Qwen model export regression | {
"login": "guangy10",
"id": 42389959,
"node_id": "MDQ6VXNlcjQyMzg5OTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/42389959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guangy10",
"html_url": "https://github.com/guangy10",
"followers_url": "https://api.github.com/users/guangy10/followers",
"following_url": "https://api.github.com/users/guangy10/following{/other_user}",
"gists_url": "https://api.github.com/users/guangy10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guangy10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guangy10/subscriptions",
"organizations_url": "https://api.github.com/users/guangy10/orgs",
"repos_url": "https://api.github.com/users/guangy10/repos",
"events_url": "https://api.github.com/users/guangy10/events{/privacy}",
"received_events_url": "https://api.github.com/users/guangy10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-29T20:32:09 | 2025-05-07T07:13:09 | 2025-05-07T07:13:09 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.52.0.dev0
- Platform: macOS-15.4.1-arm64-arm-64bit
- Python version: 3.11.11
- Huggingface_hub version: 0.30.1
- Safetensors version: 0.5.2
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.8.0.dev20250325 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Qwen models (including Qwen2 and Qwen3) fail to export on the latest trunk. This appears to be a regression since the latest release.
I've verified that with `transformers==4.51.3` those models export fine.
I've also verified that the same tests run and pass on other models (e.g. llama, gemma), so the regression is Qwen-specific.
How to reproduce?
`RUN_SLOW=1 pytest tests/models/qwen2/test_modeling_qwen2.py -v -s -k test_export`
The failure and stacktrace:
```
FAILED tests/models/qwen2/test_modeling_qwen2.py::Qwen2IntegrationTest::test_export_static_cache - torch._dynamo.exc.Unsupported: Unexpected type in sourceless builder builtins.method
from user code:
File "/Users/guangyang/transformers/src/transformers/integrations/executorch.py", line 312, in forward
outs = self.model(
File "/Users/guangyang/miniconda3/envs/executorch/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/guangyang/transformers/src/transformers/utils/generic.py", line 969, in wrapper
output = func(self, *args, **kwargs)
File "/Users/guangyang/transformers/src/transformers/models/qwen2/modeling_qwen2.py", line 823, in forward
outputs: BaseModelOutputWithPast = self.model(
File "/Users/guangyang/miniconda3/envs/executorch/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/guangyang/transformers/src/transformers/utils/generic.py", line 969, in wrapper
output = func(self, *args, **kwargs)
File "/Users/guangyang/transformers/src/transformers/models/qwen2/modeling_qwen2.py", line 531, in forward
causal_mask = self._update_causal_mask(
File "/Users/guangyang/transformers/src/transformers/models/qwen2/modeling_qwen2.py", line 640, in _update_causal_mask
causal_mask = self._prepare_4d_causal_attention_mask_with_cache_position(
File "/Users/guangyang/transformers/src/transformers/models/qwen2/modeling_qwen2.py", line 708, in _prepare_4d_causal_attention_mask_with_cache_position
if config.get_text_config().sliding_window is not None:
File "/Users/guangyang/transformers/src/transformers/configuration_utils.py", line 211, in __getattribute__
return super().__getattribute__(key)
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Expected behavior
The test should pass | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37876/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37875 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37875/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37875/comments | https://api.github.com/repos/huggingface/transformers/issues/37875/events | https://github.com/huggingface/transformers/pull/37875 | 3,029,391,629 | PR_kwDOCUB6oc6UZ3Dl | 37,875 | Add DEIM object detection model | {
"login": "sushmanthreddy",
"id": 73489688,
"node_id": "MDQ6VXNlcjczNDg5Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/73489688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sushmanthreddy",
"html_url": "https://github.com/sushmanthreddy",
"followers_url": "https://api.github.com/users/sushmanthreddy/followers",
"following_url": "https://api.github.com/users/sushmanthreddy/following{/other_user}",
"gists_url": "https://api.github.com/users/sushmanthreddy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sushmanthreddy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sushmanthreddy/subscriptions",
"organizations_url": "https://api.github.com/users/sushmanthreddy/orgs",
"repos_url": "https://api.github.com/users/sushmanthreddy/repos",
"events_url": "https://api.github.com/users/sushmanthreddy/events{/privacy}",
"received_events_url": "https://api.github.com/users/sushmanthreddy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 5769473378,
"node_id": "LA_kwDOCUB6oc8AAAABV-MtYg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Vision",
"name": "Vision",
"color": "C079EF",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-04-29T19:57:05 | 2025-09-01T16:19:50 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37875",
"html_url": "https://github.com/huggingface/transformers/pull/37875",
"diff_url": "https://github.com/huggingface/transformers/pull/37875.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37875.patch",
"merged_at": null
} | close #36204
https://github.com/ShihuaHuang95/DEIM/tree/main
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37875/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37875/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37874 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37874/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37874/comments | https://api.github.com/repos/huggingface/transformers/issues/37874/events | https://github.com/huggingface/transformers/issues/37874 | 3,029,237,898 | I_kwDOCUB6oc60joCK | 37,874 | Speech2TextForConditionalGeneration broken in transformers 4.51.x | {
"login": "aaron-siegel",
"id": 2014957,
"node_id": "MDQ6VXNlcjIwMTQ5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2014957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaron-siegel",
"html_url": "https://github.com/aaron-siegel",
"followers_url": "https://api.github.com/users/aaron-siegel/followers",
"following_url": "https://api.github.com/users/aaron-siegel/following{/other_user}",
"gists_url": "https://api.github.com/users/aaron-siegel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaron-siegel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaron-siegel/subscriptions",
"organizations_url": "https://api.github.com/users/aaron-siegel/orgs",
"repos_url": "https://api.github.com/users/aaron-siegel/repos",
"events_url": "https://api.github.com/users/aaron-siegel/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaron-siegel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-29T18:55:13 | 2025-06-17T17:53:18 | 2025-05-06T13:49:01 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.51.3
- Platform: macOS-15.3.1-arm64-arm-64bit
- Python version: 3.12.9
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
Running the Speech2Text example on transformers 4.51.x gives either nonsense output or no output. The code I'm running is taken verbatim from https://huggingface.co/docs/transformers/en/model_doc/speech_to_text
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
transcription
```
On transformers 4.50.3 it gives the expected output:
```python
['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel']
```
On transformers 4.51.x it gives either no output or nonsense output:
With Python 3.12 & transformers 4.51.3:
```python
['that man man man man man man man man man man man man turn turn turn turn turn turn turn turn turn thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin']
```
With Python 3.9 & transformers 4.51.3:
```python
['']
```
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`conda create --name temp python=3.12`
`conda activate temp`
`pip install torch torchaudio soundfile librosa datasets transformers sentencepiece`
```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
transcription
```
### Expected behavior
```python
['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel']
``` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37874/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37873 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37873/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37873/comments | https://api.github.com/repos/huggingface/transformers/issues/37873/events | https://github.com/huggingface/transformers/pull/37873 | 3,029,158,867 | PR_kwDOCUB6oc6UZFVA | 37,873 | [tests] Test all cache implementations | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T18:18:41 | 2025-05-09T14:54:15 | 2025-04-30T14:37:00 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37873",
"html_url": "https://github.com/huggingface/transformers/pull/37873",
"diff_url": "https://github.com/huggingface/transformers/pull/37873.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37873.patch",
"merged_at": "2025-04-30T14:37:00"
} | # What does this PR do?
The main purpose of this PR is to convert a few slow tests targeted at one cache implementation into fast tests that run on ALL cache implementations.
Secondarily, it makes `RUN_SLOW=1 py.test tests/utils/test_cache_utils.py` green 🟢. These tests also become much faster (3 min -> 1 min on my machine), despite covering a larger number of features.
This is a follow-up to https://github.com/huggingface/transformers/pull/37684, which paved the way for this PR. After this PR is merged, I can go back to https://github.com/huggingface/transformers/pull/37394 and properly test things!
👉 torch.compile was benchmarked with gemma2/hybrid and qwen3/static, no speed regressions.
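The "one fast test over all cache implementations" idea can be illustrated with a toy sketch (hypothetical names and stubbed generation — not the PR's actual test code; the implementation strings mirror the `generate(cache_implementation=...)` values, and a real test would compare actual model generations):

```python
# Hedged sketch: check every cache implementation against a baseline in one
# fast test, instead of maintaining a separate slow test per cache class.
CACHE_IMPLEMENTATIONS = ["dynamic", "static", "offloaded", "quantized", "sliding_window"]

def fake_generate(cache_implementation: str) -> list:
    # Stand-in for `model.generate(..., cache_implementation=...)`;
    # returns a fixed token sequence so the loop below is self-contained.
    return [1, 2, 3]

def test_all_cache_implementations():
    baseline = fake_generate("dynamic")
    for impl in CACHE_IMPLEMENTATIONS:
        assert fake_generate(impl) == baseline, f"{impl} diverged from baseline"

test_all_cache_implementations()
```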
👉 no regressions in `RUN_SLOW=1 py.test tests/models/llama/test_modeling_llama.py` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37873/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37873/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37872 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37872/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37872/comments | https://api.github.com/repos/huggingface/transformers/issues/37872/events | https://github.com/huggingface/transformers/pull/37872 | 3,029,119,406 | PR_kwDOCUB6oc6UY8tu | 37,872 | Llama Guard updates | {
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T18:00:43 | 2025-05-20T14:12:58 | 2025-04-30T08:34:44 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37872",
"html_url": "https://github.com/huggingface/transformers/pull/37872",
"diff_url": "https://github.com/huggingface/transformers/pull/37872.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37872.patch",
"merged_at": "2025-04-30T08:34:44"
} | Reviewed by @molbap privately.
#37852 should be merged too. | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37872/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37872/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37871 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37871/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37871/comments | https://api.github.com/repos/huggingface/transformers/issues/37871/events | https://github.com/huggingface/transformers/pull/37871 | 3,028,972,893 | PR_kwDOCUB6oc6UYdS7 | 37,871 | Fix Qwen3 tp plan with FP8 | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T16:51:49 | 2025-04-30T16:14:12 | 2025-04-30T16:14:10 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37871",
"html_url": "https://github.com/huggingface/transformers/pull/37871",
"diff_url": "https://github.com/huggingface/transformers/pull/37871.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37871.patch",
"merged_at": "2025-04-30T16:14:10"
} | # What does this PR do?
This PR :
- Enables the device context for the FP8 activation kernel (if disabled, it causes a Triton memory error)
- Updates the tp plan for the qwen3 fp8 models (since they use finegrained fp8, the scales will follow the sharding strategy of the weights) | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37871/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37871/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37870 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37870/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37870/comments | https://api.github.com/repos/huggingface/transformers/issues/37870/events | https://github.com/huggingface/transformers/pull/37870 | 3,028,949,258 | PR_kwDOCUB6oc6UYYKw | 37,870 | Bump transformers from 4.48.0 to 4.50.0 in /examples/tensorflow/language-modeling-tpu | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
},
{
"id": 6410654816,
"node_id": "LA_kwDOCUB6oc8AAAABfhrUYA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/python",
"name": "python",
"color": "2b67c6",
"default": false,
"description": "Pull requests that update Python code"
}
] | open | false | null | [] | null | [] | 2025-04-29T16:41:26 | 2025-06-05T12:38:06 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37870",
"html_url": "https://github.com/huggingface/transformers/pull/37870",
"diff_url": "https://github.com/huggingface/transformers/pull/37870.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37870.patch",
"merged_at": null
} | Bumps [transformers](https://github.com/huggingface/transformers) from 4.48.0 to 4.50.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/huggingface/transformers/releases">transformers's releases</a>.</em></p>
<blockquote>
<h1>Release v4.50.0</h1>
<h2>New Model Additions</h2>
<h3>Model-based releases</h3>
<p>Starting with version v4.49.0, we have been doing model-based releases, in addition to our traditional, software-based monthly releases. These model-based releases provide a tag from which models may be installed.</p>
<p>Contrary to our software releases, these are not pushed to PyPI and are kept on our GitHub. Each release has a tag attributed to it, such as:</p>
<ul>
<li><code>v4.49.0-Gemma-3</code></li>
<li><code>v4.49.0-AyaVision</code></li>
</ul>
<p>⚠️ As bugs are identified and fixed on each model, the release tags are updated so that installing from that tag always gives the best experience possible with that model.</p>
<p>Each new model release will always be based on the current state of the main branch at the time of its creation. This ensures that new models start with the latest features and fixes available.</p>
<p>For example, if two models—Gemma-3 and AyaVision—are released from main, and then a fix for gemma3 is merged, it will look something like this:</p>
<pre><code> o---- v4.49.0-Gemma-3 (includes AyaVision, plus main fixes)
/ \
---o--o--o--o--o-- (fix for gemma3) --o--o--o main
\
o---- v4.49.0-AyaVision
</code></pre>
<p>We strive to merge model specific fixes on their respective branches as fast as possible!</p>
<h3>Gemma 3</h3>
<p><img src="https://github.com/user-attachments/assets/2b7f31b3-02bd-496a-9d4e-a1867bd6d9d4" alt="image" /></p>
<p>Gemma 3 is heavily referenced in the following <a href="https://github.com/huggingface/transformers/releases/tag/v4.49.0-Gemma-3">model-based release</a> and we recommend reading these if you want all the information relative to that model.</p>
<p>The Gemma 3 model was proposed by Google. It is a vision-language model composed of a <a href="https://huggingface.co/docs/transformers/model_doc/siglip">SigLIP</a> vision encoder and a <a href="https://huggingface.co/docs/transformers/model_doc/gemma_2">Gemma 2</a> language decoder linked by a multimodal linear projection.</p>
<p>It cuts an image into a fixed number of tokens in the same way as SigLIP if the image does not exceed a certain aspect ratio. For images that exceed the given aspect ratio, it crops the image into multiple smaller patches and concatenates them with the base image embedding.</p>
<p>One particularity is that the model uses bidirectional attention on all the image tokens. Also, the model interleaves sliding-window local attention with full causal attention in the language backbone, where every sixth layer is a full causal attention layer.</p>
<ul>
<li>Gemma3 by <a href="https://github.com/RyanMullins"><code>@RyanMullins</code></a> in <a href="https://redirect.github.com/huggingface/transformers/issues/36658">#36658</a></li>
</ul>
<h3>Shield Gemma2</h3>
<p>ShieldGemma 2 is built on <a href="https://ai.google.dev/gemma/docs/core/model_card_3">Gemma 3</a>, is a 4 billion (4B) parameter model that checks the safety of both synthetic and natural images against key categories to help you build robust datasets and models. With this addition to the Gemma family of models, researchers and developers can now easily minimize the risk of harmful content in their models across key areas of harm as defined below:</p>
<ul>
<li>No Sexually Explicit content: The image shall not contain content that depicts explicit or graphic sexual acts (e.g., pornography, erotic nudity, depictions of rape or sexual assault).</li>
<li>No Dangerous Content: The image shall not contain content that facilitates or encourages activities that could cause real-world harm (e.g., building firearms and explosive devices, promotion of terrorism, instructions for suicide).</li>
<li>No Violence/Gore content: The image shall not contain content that depicts shocking, sensational, or gratuitous violence (e.g., excessive blood and gore, gratuitous violence against animals, extreme injury or moment of death).</li>
</ul>
<p>We recommend using ShieldGemma 2 as an input filter to vision language models, or as an output filter of image generation systems. To train a robust image safety model, we curated training datasets of natural and synthetic images and instruction-tuned Gemma 3 to demonstrate strong performance.</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/huggingface/transformers/commit/0b057e66b52556da3a1cbc29e2a98c0784ea9c33"><code>0b057e6</code></a> fix import issue</li>
<li><a href="https://github.com/huggingface/transformers/commit/26fbd6919af810bf508eaea8b9eb9dcee829e228"><code>26fbd69</code></a> v 4.50.0</li>
<li><a href="https://github.com/huggingface/transformers/commit/523f6e743c74ecea90d0c37a172c9819b5691a19"><code>523f6e7</code></a> Fix: dtype cannot be str (<a href="https://redirect.github.com/huggingface/transformers/issues/36262">#36262</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/3f9ff19b4ec7dcf4112225079f26ea756aafd211"><code>3f9ff19</code></a> Minor Gemma 3 fixes (<a href="https://redirect.github.com/huggingface/transformers/issues/36884">#36884</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/f94b0c59f20447c0e6bdb6d381ea014fa47ecac8"><code>f94b0c5</code></a> Use <code>deformable_detr</code> kernel from the Hub (<a href="https://redirect.github.com/huggingface/transformers/issues/36853">#36853</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/2638d54e7851f1323dc78a8b513b041835aba27b"><code>2638d54</code></a> Gemma 3 tests expect greedy decoding (<a href="https://redirect.github.com/huggingface/transformers/issues/36882">#36882</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/b8aadc31d56e49d8b9075e73e5c433f7c5b4e04b"><code>b8aadc3</code></a> :red_circle: :red_circle: :red_circle: supersede paligemma forward to shift p...</li>
<li><a href="https://github.com/huggingface/transformers/commit/6321876b5bac106d7e7c84b53418ea31fe1d9754"><code>6321876</code></a> add eustlb as an actor</li>
<li><a href="https://github.com/huggingface/transformers/commit/94f487626a296deac0022dda6462c0d9f2336106"><code>94f4876</code></a> [generate] model defaults being inherited only happens for newer models (<a href="https://redirect.github.com/huggingface/transformers/issues/36881">#36881</a>)</li>
<li><a href="https://github.com/huggingface/transformers/commit/f19d018bfff1613ba05dcbf7e82c461d49aee73e"><code>f19d018</code></a> Revert "Update deprecated Jax calls (<a href="https://redirect.github.com/huggingface/transformers/issues/35919">#35919</a>)" (<a href="https://redirect.github.com/huggingface/transformers/issues/36880">#36880</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/huggingface/transformers/compare/v4.48.0...v4.50.0">compare view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
You can trigger a rebase of this PR by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
> **Note**
> Automatic rebases have been disabled on this pull request as it has been open for over 30 days.
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37870/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37869 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37869/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37869/comments | https://api.github.com/repos/huggingface/transformers/issues/37869/events | https://github.com/huggingface/transformers/pull/37869 | 3,028,782,641 | PR_kwDOCUB6oc6UX0R8 | 37,869 | Hybrid cache v2 | {
"login": "tugsbayasgalan",
"id": 16603271,
"node_id": "MDQ6VXNlcjE2NjAzMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/16603271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tugsbayasgalan",
"html_url": "https://github.com/tugsbayasgalan",
"followers_url": "https://api.github.com/users/tugsbayasgalan/followers",
"following_url": "https://api.github.com/users/tugsbayasgalan/following{/other_user}",
"gists_url": "https://api.github.com/users/tugsbayasgalan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tugsbayasgalan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tugsbayasgalan/subscriptions",
"organizations_url": "https://api.github.com/users/tugsbayasgalan/orgs",
"repos_url": "https://api.github.com/users/tugsbayasgalan/repos",
"events_url": "https://api.github.com/users/tugsbayasgalan/events{/privacy}",
"received_events_url": "https://api.github.com/users/tugsbayasgalan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-29T15:34:52 | 2025-05-08T14:41:07 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37869",
"html_url": "https://github.com/huggingface/transformers/pull/37869",
"diff_url": "https://github.com/huggingface/transformers/pull/37869.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37869.patch",
"merged_at": null
} | Reapply of https://github.com/huggingface/transformers/pull/37623 .
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37869/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37868 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37868/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37868/comments | https://api.github.com/repos/huggingface/transformers/issues/37868/events | https://github.com/huggingface/transformers/pull/37868 | 3,028,667,946 | PR_kwDOCUB6oc6UXbIV | 37,868 | Add xcodec2 model | {
"login": "Deep-unlearning",
"id": 58599908,
"node_id": "MDQ6VXNlcjU4NTk5OTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/58599908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Deep-unlearning",
"html_url": "https://github.com/Deep-unlearning",
"followers_url": "https://api.github.com/users/Deep-unlearning/followers",
"following_url": "https://api.github.com/users/Deep-unlearning/following{/other_user}",
"gists_url": "https://api.github.com/users/Deep-unlearning/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Deep-unlearning/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Deep-unlearning/subscriptions",
"organizations_url": "https://api.github.com/users/Deep-unlearning/orgs",
"repos_url": "https://api.github.com/users/Deep-unlearning/repos",
"events_url": "https://api.github.com/users/Deep-unlearning/events{/privacy}",
"received_events_url": "https://api.github.com/users/Deep-unlearning/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-04-29T14:58:31 | 2025-10-21T01:11:37 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37868",
"html_url": "https://github.com/huggingface/transformers/pull/37868",
"diff_url": "https://github.com/huggingface/transformers/pull/37868.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37868.patch",
"merged_at": null
} | # What does this PR do?
This PR adds support for [XCodec2](https://github.com/zhenye234/X-Codec-2.0), a high-fidelity general-purpose neural audio codec used in [Llasa](https://huggingface.co/collections/HKUSTAudio/llasa-679b87dbd06ac556cc0e0f44), a text-to-speech model, to the Transformers library.
This model is composed of 5 components:
- A Semantic Encoder
- An Acoustic Encoder
- A VectorQuantizer
- A Semantic Decoder
- An Acoustic Decoder
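As a toy, dependency-free illustration of what the VectorQuantizer component does (nearest-codebook lookup), here is a minimal sketch; this is not XCodec2's implementation, and all names are illustrative:

```python
# Toy vector quantizer: map each input vector to its nearest codebook entry.
# Illustrative sketch of the general VQ idea only, NOT XCodec2 code.

def squared_dist(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def quantize(vectors, codebook):
    """Return (indices, quantized_vectors) for a batch of input vectors."""
    indices, quantized = [], []
    for v in vectors:
        # Pick the codebook row with minimal distance to v.
        idx = min(range(len(codebook)), key=lambda i: squared_dist(v, codebook[i]))
        indices.append(idx)
        quantized.append(codebook[idx])
    return indices, quantized

codebook = [[0.0, 0.0], [1.0, 1.0], [-1.0, -1.0]]
idx, q = quantize([[0.9, 1.2], [-0.8, -1.1]], codebook)
print(idx)  # → [1, 2]
```

In the real model the codebook is learned and the lookup is batched on GPU, but the round-trip (encode → indices → decode) follows this shape.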
This is still a draft PR. Work done so far:
- Adapted the model to Transformers format in `modeling_xcodec2.py` and `modular_xcodec2.py`.
## Todo
- [x] Add the checkpoint conversion scripts and push to the hub
- [x] Support batch inference
- [x] Write Tests
- [x] Add documentation
## Who can review?
cc: @ArthurZucker
cc: @eustlb @Vaibhavs10 for visibility
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37868/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37867 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37867/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37867/comments | https://api.github.com/repos/huggingface/transformers/issues/37867/events | https://github.com/huggingface/transformers/issues/37867 | 3,028,621,723 | I_kwDOCUB6oc60hRmb | 37,867 | Option for save_pretrained() to Export Model Source Code Files | {
"login": "WhenMelancholy",
"id": 21274779,
"node_id": "MDQ6VXNlcjIxMjc0Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/21274779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WhenMelancholy",
"html_url": "https://github.com/WhenMelancholy",
"followers_url": "https://api.github.com/users/WhenMelancholy/followers",
"following_url": "https://api.github.com/users/WhenMelancholy/following{/other_user}",
"gists_url": "https://api.github.com/users/WhenMelancholy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WhenMelancholy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WhenMelancholy/subscriptions",
"organizations_url": "https://api.github.com/users/WhenMelancholy/orgs",
"repos_url": "https://api.github.com/users/WhenMelancholy/repos",
"events_url": "https://api.github.com/users/WhenMelancholy/events{/privacy}",
"received_events_url": "https://api.github.com/users/WhenMelancholy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-04-29T14:42:46 | 2025-04-30T11:40:40 | null | NONE | null | null | null | null | ### Feature request
Add an optional flag (e.g., `include_source_code=True`) to `PreTrainedModel.save_pretrained()` / `PreTrainedTokenizer.save_pretrained()` that copies the currently used Python implementation files (e.g., `modeling_xxx.py`, `configuration_xxx.py`, `tokenization_xxx.py`) into the target directory, even when those files already exist in the Transformers package.
### Motivation
- **Editable snapshots**
When iterating on research ideas, I often need to tweak the model’s forward pass or config class after fine-tuning. Editing code inside site-packages/transformers is brittle (breaks with upgrades, affects unrelated projects, requires virtual-env gymnastics).
- **Reproducibility & archiving**
Archiving both weights and exact code in a single artifact dramatically simplifies sharing and long-term reproducibility, especially when the upstream library may evolve.
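A minimal standard-library sketch of what such an export could look like; the helper name and behavior here are assumptions for illustration, not an existing transformers API:

```python
import inspect
import json
import shutil
import tempfile
from pathlib import Path

def export_source_files(objs, target_dir):
    # Hypothetical helper sketching the requested include_source_code
    # behavior; not part of transformers.
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    copied = []
    for obj in objs:
        # Resolve the .py file that defines each object's class,
        # e.g. .../modeling_xxx.py for a model instance.
        src = Path(inspect.getfile(type(obj)))
        dest = target / src.name
        shutil.copyfile(src, dest)
        copied.append(dest)
    return copied

# Demo with a stdlib object standing in for a model/tokenizer:
copied = export_source_files([json.JSONDecoder()], tempfile.mkdtemp())
print(copied[0].name)  # → decoder.py
```

`save_pretrained(..., include_source_code=True)` could then call something like `export_source_files([model], save_directory)` after writing the weights.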
### Your contribution
I’m not very familiar with the internal mechanism Transformers uses to store model files; if the feature is feasible, I’m willing to submit a PR myself later. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37867/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/37866 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37866/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37866/comments | https://api.github.com/repos/huggingface/transformers/issues/37866/events | https://github.com/huggingface/transformers/pull/37866 | 3,028,568,628 | PR_kwDOCUB6oc6UXFze | 37,866 | 🚨🚨[core] Completely rewrite the masking logic for all attentions | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T14:26:21 | 2025-06-10T18:01:39 | 2025-05-22T09:38:26 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37866",
"html_url": "https://github.com/huggingface/transformers/pull/37866",
"diff_url": "https://github.com/huggingface/transformers/pull/37866.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37866.patch",
"merged_at": "2025-05-22T09:38:26"
} | # What does this PR do?
As per the title. The goal is to properly separate masking logic from modeling code itself, to continue our objective of simplifying the library.
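The central pattern, a mask expressed as a small predicate over (query index, key index) positions, can be sketched without any dependencies; the names here are illustrative, not the PR's actual API:

```python
# Each attention pattern is a tiny predicate over (q_idx, kv_idx) positions.
# Illustrative sketch only, not the actual API introduced by this PR.

def causal(q_idx, kv_idx):
    # Token q may attend to token kv only if kv is not in the future.
    return kv_idx <= q_idx

def sliding_window(window):
    # Causal attention restricted to the last `window` positions.
    def mask_mod(q_idx, kv_idx):
        return kv_idx <= q_idx and q_idx - kv_idx < window
    return mask_mod

def build_mask(mask_mod, q_len, kv_len):
    # Materialize a dense boolean mask only for backends that need one.
    return [[mask_mod(q, k) for k in range(kv_len)] for q in range(q_len)]

print(build_mask(sliding_window(2), 4, 4)[2])  # → [False, True, True, False]
```

A new attention pattern then only requires a new predicate; whether a dense mask is materialized at all stays a backend detail.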
- Code is much simpler to understand
- Much more general: always works for all lengths and all attention implementations, e.g.:
- flex attention now works with sliding/hybrid models (not the case before)
- FA2 now works with static caches, including models with default hybrid structures (previously this was only the case for hybrid models)
- All models can use all Cache classes (e.g. models with Hybrid structure can default back to use DynamicCache)
- Extremely scalable in the future: any pattern of layers can be taken into account WITHOUT ANY CHANGE to modeling or masking. A new masking pattern (e.g. the recently introduced chunked attention for Llama4) can be added with minimal effort (just add a new mask_mod to describe it, and voila!)
- A single source of truth: mask creation used to be copied over and over, sometimes with slight changes to account for sliding windows or similar. That would eventually lead to mistakes or inefficiencies as things were "forced to fit", and to a lot of maintenance burden
- Compile-compatible: the new mask creation is technically compile-compatible; it should, however, stay outside what is compiled in the forward to avoid recompilations, which is how it is handled in `generate`
- Allow external mask creation: In case someone passes their custom attention implementation, they may need their own mask creation function, which is now supported
- TGI/vLLM backend should be even more efficient now, as we don't waste compute on creating a useless mask (would previously create a 4d mask as for sdpa, which would not be used) | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37866/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 8,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37866/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37865 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37865/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37865/comments | https://api.github.com/repos/huggingface/transformers/issues/37865/events | https://github.com/huggingface/transformers/pull/37865 | 3,028,427,724 | PR_kwDOCUB6oc6UWmy9 | 37,865 | update Clean_up_tokenization_spaces typos. | {
"login": "zhanluxianshen",
"id": 161462588,
"node_id": "U_kgDOCZ-5PA",
"avatar_url": "https://avatars.githubusercontent.com/u/161462588?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhanluxianshen",
"html_url": "https://github.com/zhanluxianshen",
"followers_url": "https://api.github.com/users/zhanluxianshen/followers",
"following_url": "https://api.github.com/users/zhanluxianshen/following{/other_user}",
"gists_url": "https://api.github.com/users/zhanluxianshen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhanluxianshen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhanluxianshen/subscriptions",
"organizations_url": "https://api.github.com/users/zhanluxianshen/orgs",
"repos_url": "https://api.github.com/users/zhanluxianshen/repos",
"events_url": "https://api.github.com/users/zhanluxianshen/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhanluxianshen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T13:47:12 | 2025-04-30T15:32:38 | 2025-04-30T12:04:49 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37865",
"html_url": "https://github.com/huggingface/transformers/pull/37865",
"diff_url": "https://github.com/huggingface/transformers/pull/37865.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37865.patch",
"merged_at": "2025-04-30T12:04:49"
} | null | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37865/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37864 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37864/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37864/comments | https://api.github.com/repos/huggingface/transformers/issues/37864/events | https://github.com/huggingface/transformers/pull/37864 | 3,028,384,528 | PR_kwDOCUB6oc6UWdFI | 37,864 | update comment in image_processing_base.py to reference image_process… | {
"login": "arjunaskykok",
"id": 32124593,
"node_id": "MDQ6VXNlcjMyMTI0NTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/32124593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arjunaskykok",
"html_url": "https://github.com/arjunaskykok",
"followers_url": "https://api.github.com/users/arjunaskykok/followers",
"following_url": "https://api.github.com/users/arjunaskykok/following{/other_user}",
"gists_url": "https://api.github.com/users/arjunaskykok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arjunaskykok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arjunaskykok/subscriptions",
"organizations_url": "https://api.github.com/users/arjunaskykok/orgs",
"repos_url": "https://api.github.com/users/arjunaskykok/repos",
"events_url": "https://api.github.com/users/arjunaskykok/events{/privacy}",
"received_events_url": "https://api.github.com/users/arjunaskykok/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T13:35:36 | 2025-04-30T13:31:30 | 2025-04-30T13:31:29 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37864",
"html_url": "https://github.com/huggingface/transformers/pull/37864",
"diff_url": "https://github.com/huggingface/transformers/pull/37864.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37864.patch",
"merged_at": "2025-04-30T13:31:29"
} | …ing_utils_fast
Fixes #37815
## Who can review?
@Rocketknight1
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37864/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37863 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37863/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37863/comments | https://api.github.com/repos/huggingface/transformers/issues/37863/events | https://github.com/huggingface/transformers/pull/37863 | 3,028,376,766 | PR_kwDOCUB6oc6UWbVZ | 37,863 | Update Model Card for Mamba | {
"login": "ParagEkbote",
"id": 69567729,
"node_id": "MDQ6VXNlcjY5NTY3NzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/69567729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ParagEkbote",
"html_url": "https://github.com/ParagEkbote",
"followers_url": "https://api.github.com/users/ParagEkbote/followers",
"following_url": "https://api.github.com/users/ParagEkbote/following{/other_user}",
"gists_url": "https://api.github.com/users/ParagEkbote/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ParagEkbote/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ParagEkbote/subscriptions",
"organizations_url": "https://api.github.com/users/ParagEkbote/orgs",
"repos_url": "https://api.github.com/users/ParagEkbote/repos",
"events_url": "https://api.github.com/users/ParagEkbote/events{/privacy}",
"received_events_url": "https://api.github.com/users/ParagEkbote/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T13:33:36 | 2025-05-21T17:59:52 | 2025-05-21T17:58:23 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37863",
"html_url": "https://github.com/huggingface/transformers/pull/37863",
"diff_url": "https://github.com/huggingface/transformers/pull/37863.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37863.patch",
"merged_at": "2025-05-21T17:58:23"
} | # What does this PR do?
As described in the issue, this PR updates the model card for Mamba. Please let me know if any modifications are required and I will make the necessary changes.
Refs #36979
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@stevhliu | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37863/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37862 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37862/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37862/comments | https://api.github.com/repos/huggingface/transformers/issues/37862/events | https://github.com/huggingface/transformers/issues/37862 | 3,028,202,862 | I_kwDOCUB6oc60frVu | 37,862 | Llama2 can output scores normally, but Llama3 outputs full inf | {
"login": "Huangshuo621",
"id": 179484761,
"node_id": "U_kgDOCrK4WQ",
"avatar_url": "https://avatars.githubusercontent.com/u/179484761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Huangshuo621",
"html_url": "https://github.com/Huangshuo621",
"followers_url": "https://api.github.com/users/Huangshuo621/followers",
"following_url": "https://api.github.com/users/Huangshuo621/following{/other_user}",
"gists_url": "https://api.github.com/users/Huangshuo621/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Huangshuo621/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Huangshuo621/subscriptions",
"organizations_url": "https://api.github.com/users/Huangshuo621/orgs",
"repos_url": "https://api.github.com/users/Huangshuo621/repos",
"events_url": "https://api.github.com/users/Huangshuo621/events{/privacy}",
"received_events_url": "https://api.github.com/users/Huangshuo621/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-29T12:45:48 | 2025-06-23T04:15:06 | 2025-06-08T08:02:29 | NONE | null | null | null | null | ### System Info
transformers:4.44.2
```python
outputs = self.model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    max_new_tokens=max_length,
    return_dict_in_generate=True,
    output_scores=True,
)
print("outputs.scores:", outputs.scores)
```
I retrieve the scores while generating; with llama2-chat the scores are printed normally, but llama3-8b-Instruct outputs all inf.
llama2:

llama3:

### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
transformers:4.44.2
```python
outputs = self.model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    max_new_tokens=max_length,
    return_dict_in_generate=True,
    output_scores=True,
)
print("outputs.scores:", outputs.scores)
```
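For context, a `-inf` entry in `outputs.scores` is not necessarily a bug by itself: `generate` returns the scores *after* logits processing, and processors such as top-k filtering or token suppression set filtered positions to `-inf`. A toy, dependency-free sketch (all values made up) of how to distinguish a fully masked step from a partially masked one:

```python
import math

# Pretend score row for one generated token over a toy vocabulary of 8 tokens.
# Processors set filtered positions to -inf; only surviving candidates stay finite.
scores_step = [-math.inf] * 8
scores_step[3] = 2.5  # the surviving candidate

num_finite = sum(math.isfinite(s) for s in scores_step)
print(num_finite)  # 1 -> partially masked; 0 would mean the whole row is masked
```

If every row is entirely `-inf`/`inf`, that points at a real numerical problem rather than normal processor masking.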
### Expected behavior
I really need a solution. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37862/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37861 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37861/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37861/comments | https://api.github.com/repos/huggingface/transformers/issues/37861/events | https://github.com/huggingface/transformers/pull/37861 | 3,028,143,962 | PR_kwDOCUB6oc6UVogF | 37,861 | Fix Bitnet tokenizer in pipeline | {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T12:29:15 | 2025-04-29T13:35:04 | 2025-04-29T13:35:02 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37861",
"html_url": "https://github.com/huggingface/transformers/pull/37861",
"diff_url": "https://github.com/huggingface/transformers/pull/37861.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37861.patch",
"merged_at": "2025-04-29T13:35:02"
} | # What does this PR do?
Adds a tokenizer for BitNet so it can be used with the text-generation pipeline.
| {
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37861/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37860 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37860/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37860/comments | https://api.github.com/repos/huggingface/transformers/issues/37860/events | https://github.com/huggingface/transformers/pull/37860 | 3,028,060,656 | PR_kwDOCUB6oc6UVWaX | 37,860 | Update attention_visualizer.py | {
"login": "tanuj-rai",
"id": 84439872,
"node_id": "MDQ6VXNlcjg0NDM5ODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/84439872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanuj-rai",
"html_url": "https://github.com/tanuj-rai",
"followers_url": "https://api.github.com/users/tanuj-rai/followers",
"following_url": "https://api.github.com/users/tanuj-rai/following{/other_user}",
"gists_url": "https://api.github.com/users/tanuj-rai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanuj-rai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanuj-rai/subscriptions",
"organizations_url": "https://api.github.com/users/tanuj-rai/orgs",
"repos_url": "https://api.github.com/users/tanuj-rai/repos",
"events_url": "https://api.github.com/users/tanuj-rai/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanuj-rai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T12:01:11 | 2025-06-24T14:21:37 | 2025-06-24T14:21:37 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37860",
"html_url": "https://github.com/huggingface/transformers/pull/37860",
"diff_url": "https://github.com/huggingface/transformers/pull/37860.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37860.patch",
"merged_at": "2025-06-24T14:21:37"
} | # What does this PR do?
Fixes #37851,
This PR removes the hardcoded `sliding_window = 5` value in the `AttentionMaskVisualizer` and makes the sliding window configurable instead. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37860/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37859 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37859/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37859/comments | https://api.github.com/repos/huggingface/transformers/issues/37859/events | https://github.com/huggingface/transformers/issues/37859 | 3,027,980,426 | I_kwDOCUB6oc60e1CK | 37,859 | BUG: ModernBERT flash-attention2 incompatible on Ascend NPU | {
"login": "wakaka6",
"id": 48764488,
"node_id": "MDQ6VXNlcjQ4NzY0NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/48764488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wakaka6",
"html_url": "https://github.com/wakaka6",
"followers_url": "https://api.github.com/users/wakaka6/followers",
"following_url": "https://api.github.com/users/wakaka6/following{/other_user}",
"gists_url": "https://api.github.com/users/wakaka6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wakaka6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wakaka6/subscriptions",
"organizations_url": "https://api.github.com/users/wakaka6/orgs",
"repos_url": "https://api.github.com/users/wakaka6/repos",
"events_url": "https://api.github.com/users/wakaka6/events{/privacy}",
"received_events_url": "https://api.github.com/users/wakaka6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-29T11:32:00 | 2025-06-08T08:02:31 | 2025-06-08T08:02:31 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.51.3
- Platform: Linux-5.4.0-125-generic-aarch64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using NPU in script?: <fill in>
- NPU type: Ascend310P3
- CANN version: 8.0.0
### Who can help?
Ascend NPU: @ivarflakstad
Related PR #36696 by @FightingZhen
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is the minimal reproducible code.
```py
import torch
import torch_npu
import numpy as np
from transformers import AutoTokenizer
from transformers.models.modernbert.modeling_modernbert import ModernBertForSequenceClassification
model = ModernBertForSequenceClassification.from_pretrained("answerdotai/ModernBERT-base", torch_dtype=torch.float16).to("npu:0")
```
Exception log
```
Traceback (most recent call last):
File "/app/modernBERT/mini.py", line 8, in <module>
model = ModernBertForSequenceClassification.from_pretrained("answerdotai/ModernBERT-base", torch_dtype=torch.float16).to("npu:0")
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 279, in _wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 4342, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/modernbert/modeling_modernbert.py", line 1184, in __init__
self.model = ModernBertModel(config)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/modernbert/modeling_modernbert.py", line 862, in __init__
[ModernBertEncoderLayer(config, layer_id) for layer_id in range(config.num_hidden_layers)]
File "/usr/local/lib/python3.10/dist-packages/transformers/models/modernbert/modeling_modernbert.py", line 862, in <listcomp>
[ModernBertEncoderLayer(config, layer_id) for layer_id in range(config.num_hidden_layers)]
File "/usr/local/lib/python3.10/dist-packages/transformers/models/modernbert/modeling_modernbert.py", line 527, in __init__
self.attn = ModernBertAttention(config=config, layer_id=layer_id)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/modernbert/modeling_modernbert.py", line 479, in __init__
self.rotary_emb = ModernBertUnpaddedRotaryEmbedding(
File "/usr/local/lib/python3.10/dist-packages/transformers/models/modernbert/modeling_modernbert.py", line 165, in __init__
super().__init__(dim=dim, base=base, pos_idx_in_fp32=True, device=device, interleaved=False)
TypeError: object.__init__() takes exactly one argument (the instance to initialize)
[ERROR] 2025-04-29-19:20:46 (PID:610680, Device:-1, RankID:-1) ERR99999 UNKNOWN applicaiton exception
```
## Bug description
When the `attn_implementation` argument of `from_pretrained` is not specified, ModernBERT automatically selects the fa2 (flash-attention-2) implementation after checking fa2 availability.
https://github.com/huggingface/transformers/blob/a847d4aa6bd2279f5be235dc0fd862f58f7403d1/src/transformers/models/modernbert/modeling_modernbert.py#L655-L673
On Ascend NPU this sets `config._attn_implementation` to fa2, so ModernBERT's fa2 availability check passes. However, the ModernBERT model definition is tightly coupled to the `flash-attn` API, which has not been adapted to Ascend NPU, and this eventually raises an exception.
https://github.com/huggingface/transformers/blob/a847d4aa6bd2279f5be235dc0fd862f58f7403d1/src/transformers/modeling_utils.py#L2262-L2271
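A minimal, dependency-free sketch of the suspected mechanism (an assumption inferred from the traceback, not the actual `modeling_modernbert.py` code): when `flash-attn` is not importable, the rotary-embedding base class can silently fall back to `object`, and forwarding keyword arguments to `object.__init__` raises exactly the `TypeError` seen above.

```python
# Assumption: without flash-attn installed, the base class resolves to `object`.
RotaryEmbeddingBase = object  # stand-in for flash_attn's RotaryEmbedding

class UnpaddedRotaryEmbedding(RotaryEmbeddingBase):
    def __init__(self, dim, base):
        # Forwarding kwargs to object.__init__ fails, matching the traceback.
        super().__init__(dim=dim, base=base)

try:
    UnpaddedRotaryEmbedding(dim=64, base=10000.0)
except TypeError as err:
    print(err)  # "object.__init__() takes exactly one argument ..."
```

This suggests the fa2 availability check passes before the flash-attn import fallback is ever exercised.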
### Expected behavior
The model should load on Ascend NPU without raising. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37859/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37858 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37858/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37858/comments | https://api.github.com/repos/huggingface/transformers/issues/37858/events | https://github.com/huggingface/transformers/pull/37858 | 3,027,939,122 | PR_kwDOCUB6oc6UU72z | 37,858 | New bart model card | {
"login": "RogerSinghChugh",
"id": 35698080,
"node_id": "MDQ6VXNlcjM1Njk4MDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/35698080?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RogerSinghChugh",
"html_url": "https://github.com/RogerSinghChugh",
"followers_url": "https://api.github.com/users/RogerSinghChugh/followers",
"following_url": "https://api.github.com/users/RogerSinghChugh/following{/other_user}",
"gists_url": "https://api.github.com/users/RogerSinghChugh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RogerSinghChugh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RogerSinghChugh/subscriptions",
"organizations_url": "https://api.github.com/users/RogerSinghChugh/orgs",
"repos_url": "https://api.github.com/users/RogerSinghChugh/repos",
"events_url": "https://api.github.com/users/RogerSinghChugh/events{/privacy}",
"received_events_url": "https://api.github.com/users/RogerSinghChugh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T11:15:09 | 2025-05-27T18:51:42 | 2025-05-27T18:51:42 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37858",
"html_url": "https://github.com/huggingface/transformers/pull/37858",
"diff_url": "https://github.com/huggingface/transformers/pull/37858.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37858.patch",
"merged_at": "2025-05-27T18:51:42"
} | # What does this PR do?
As mentioned in issue #36979, this PR updates the documentation of the BART model to align it with the standardized format used across the docs.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu, please let me know if any changes are needed.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37858/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37857 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37857/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37857/comments | https://api.github.com/repos/huggingface/transformers/issues/37857/events | https://github.com/huggingface/transformers/issues/37857 | 3,027,811,335 | I_kwDOCUB6oc60eLwH | 37,857 | ImageInput doesn't include JAX ndarray and TensorFlow tensor | {
"login": "arjunaskykok",
"id": 32124593,
"node_id": "MDQ6VXNlcjMyMTI0NTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/32124593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arjunaskykok",
"html_url": "https://github.com/arjunaskykok",
"followers_url": "https://api.github.com/users/arjunaskykok/followers",
"following_url": "https://api.github.com/users/arjunaskykok/following{/other_user}",
"gists_url": "https://api.github.com/users/arjunaskykok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arjunaskykok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arjunaskykok/subscriptions",
"organizations_url": "https://api.github.com/users/arjunaskykok/orgs",
"repos_url": "https://api.github.com/users/arjunaskykok/repos",
"events_url": "https://api.github.com/users/arjunaskykok/events{/privacy}",
"received_events_url": "https://api.github.com/users/arjunaskykok/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T10:23:02 | 2025-06-08T08:02:33 | 2025-06-08T08:02:33 | CONTRIBUTOR | null | null | null | null | In the [`image_utils.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/image_utils.py) file, we can see the following code:
```python
class ImageType(ExplicitEnum):
    PIL = "pillow"
    TORCH = "torch"
    NUMPY = "numpy"
    TENSORFLOW = "tensorflow"
    JAX = "jax"


def get_image_type(image):
    if is_pil_image(image):
        return ImageType.PIL
    if is_torch_tensor(image):
        return ImageType.TORCH
    if is_numpy_array(image):
        return ImageType.NUMPY
    if is_tf_tensor(image):
        return ImageType.TENSORFLOW
    if is_jax_tensor(image):
        return ImageType.JAX
    raise ValueError(f"Unrecognised image type {type(image)}")


def is_valid_image(img):
    return is_pil_image(img) or is_numpy_array(img) or is_torch_tensor(img) or is_tf_tensor(img) or is_jax_tensor(img)
```
It supports PIL images, NumPy ndarrays, torch tensors, TensorFlow tensors, and JAX ndarrays.
But `ImageInput` doesn't!
```python
ImageInput = Union[
"PIL.Image.Image", np.ndarray, "torch.Tensor", list["PIL.Image.Image"], list[np.ndarray], list["torch.Tensor"]
] # noqa
```
I think it should be this way:
```python
ImageInput = Union[
    "PIL.Image.Image", np.ndarray, "torch.Tensor", "tf.Tensor", "jax.numpy.ndarray",
    list["PIL.Image.Image"], list[np.ndarray], list["torch.Tensor"],
    list["tf.Tensor"], list["jax.numpy.ndarray"],
] # noqa
```
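A quick, dependency-free way to sanity-check the shape of the proposed alias (forward-reference strings stand in for every library here, including numpy, so this sketch runs without any of them installed — the real alias would of course import numpy):

```python
from typing import Union, get_args

# Sketch of the proposed 10-member union, using string forward references only.
ImageInput = Union[
    "PIL.Image.Image", "np.ndarray", "torch.Tensor", "tf.Tensor", "jax.numpy.ndarray",
    list["PIL.Image.Image"], list["np.ndarray"], list["torch.Tensor"],
    list["tf.Tensor"], list["jax.numpy.ndarray"],
]

print(len(get_args(ImageInput)))  # 10 — one entry per supported type, plus lists
```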
What do you think? | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37857/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37856 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37856/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37856/comments | https://api.github.com/repos/huggingface/transformers/issues/37856/events | https://github.com/huggingface/transformers/pull/37856 | 3,027,755,001 | PR_kwDOCUB6oc6UUT6q | 37,856 | Use torch 2.7.1 on CircleCI jobs | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T09:59:52 | 2025-06-06T08:16:58 | 2025-06-06T08:16:57 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37856",
"html_url": "https://github.com/huggingface/transformers/pull/37856",
"diff_url": "https://github.com/huggingface/transformers/pull/37856.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37856.patch",
"merged_at": "2025-06-06T08:16:57"
} | # What does this PR do?
Use torch 2.7 on CircleCI jobs | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37856/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37856/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37855 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37855/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37855/comments | https://api.github.com/repos/huggingface/transformers/issues/37855/events | https://github.com/huggingface/transformers/pull/37855 | 3,027,735,209 | PR_kwDOCUB6oc6UUPkA | 37,855 | Add Intel Gaudi doc | {
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T09:52:06 | 2025-04-30T06:54:34 | 2025-04-29T20:28:06 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37855",
"html_url": "https://github.com/huggingface/transformers/pull/37855",
"diff_url": "https://github.com/huggingface/transformers/pull/37855.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37855.patch",
"merged_at": "2025-04-29T20:28:06"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As per title, following #36424.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37855/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37854 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37854/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37854/comments | https://api.github.com/repos/huggingface/transformers/issues/37854/events | https://github.com/huggingface/transformers/pull/37854 | 3,027,709,776 | PR_kwDOCUB6oc6UUJ_i | 37,854 | Support for version spec in requires & arbitrary mismatching depths across folders | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T09:43:31 | 2025-05-09T13:26:29 | 2025-05-09T13:26:27 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37854",
"html_url": "https://github.com/huggingface/transformers/pull/37854",
"diff_url": "https://github.com/huggingface/transformers/pull/37854.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37854.patch",
"merged_at": "2025-05-09T13:26:27"
} | This PR adds two features:
### Version specification in `@requires`
Specific versions can now be specified in each backend. For example, this is how you would specify
a requirement on torch>=2.6, as well as `accelerate`, on the `Trainer` class:
```python
from .utils.import_utils import requires
@requires(backends=("torch>=2.6", "accelerate"))
class Trainer:
...
```
The following operators can be used to specify versions: `==`, `>`, `>=`, `<`, `<=`, `!=`.
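As a rough illustration of how a spec string like `"torch>=2.6"` could be split and checked — this is a hedged sketch with hypothetical helpers (`parse_backend_spec`, `version_satisfies`), not the actual code in `import_utils`:

```python
import operator
import re

# Map each supported operator to its comparison function.
OPS = {"==": operator.eq, "!=": operator.ne, ">=": operator.ge,
       "<=": operator.le, ">": operator.gt, "<": operator.lt}

def parse_backend_spec(spec):
    # Split "torch>=2.6" into ("torch", ">=", "2.6"); a bare backend
    # name like "accelerate" yields (name, None, None).
    match = re.match(r"^([a-zA-Z0-9_-]+)(==|!=|>=|<=|>|<)?(.+)?$", spec)
    return match.groups()

def version_satisfies(installed, op, required):
    if op is None:  # bare backend name: any installed version is fine
        return True
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return OPS[op](to_tuple(installed), to_tuple(required))

print(parse_backend_spec("torch>=2.6"))         # ('torch', '>=', '2.6')
print(version_satisfies("2.7.0", ">=", "2.6"))  # True
```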
### Arbitrary depth of model definition
This completes the `spread_import_structure` method so that arbitrary depths of import structures may be created correctly.
For example, the following (simplified and exemplified) import structure has four different levels of depth:
```py
{
frozenset(): {"dummy_non_model": {"DummyObject"}},
"models": {
frozenset(): {"dummy_config": {"DummyConfig"}},
"albert": {
frozenset(): {"configuration_albert": {"AlbertConfig", "AlbertOnnxConfig"}},
frozenset({"torch"}): {
"modeling_albert": {
"AlbertForMaskedLM",
}
},
},
"llama": {
frozenset(): {"configuration_llama": {"LlamaConfig"}},
frozenset({"torch"}): {
"modeling_llama": {
"LlamaForCausalLM",
}
},
},
"deprecated": {
"transfo_xl": {
frozenset({"torch"}): {
"modeling_transfo_xl": {
"TransfoXLModel",
}
},
frozenset(): {
"configuration_transfo_xl": {"TransfoXLConfig"},
"tokenization_transfo_xl": {"TransfoXLCorpus", "TransfoXLTokenizer"},
},
},
"deta": {
frozenset({"torch"}): {
"modeling_deta": {"DetaForObjectDetection", "DetaModel", "DetaPreTrainedModel"}
},
frozenset(): {"configuration_deta": {"DetaConfig"}},
frozenset({"vision"}): {"image_processing_deta": {"DetaImageProcessor"}},
},
},
}
}
```
The first frozenset is encountered at `frozenset()`, the second at `models.frozenset()`, the third at `models.albert.frozenset()`, and the fourth at `models.deprecated.transfo_xl.frozenset()`.
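The propagation idea can be sketched as a recursive flatten — this is a hedged toy version, not the actual `spread_import_structure` code: walk the nested structure, hoist every frozenset of requirements to the top level, and join the module path with dots along the way.

```python
def flatten_import_structure(structure, prefix=""):
    # Hoist all frozenset keys to the top, building dotted module paths.
    flat = {}
    for key, value in structure.items():
        if isinstance(key, frozenset):
            for module, objects in value.items():
                flat.setdefault(key, {})[f"{prefix}{module}"] = objects
        else:  # a sub-folder: recurse with an extended dotted prefix
            sub = flatten_import_structure(value, f"{prefix}{key}.")
            for req, modules in sub.items():
                flat.setdefault(req, {}).update(modules)
    return flat

nested = {
    frozenset(): {"dummy_non_model": {"DummyObject"}},
    "models": {
        frozenset(): {"dummy_config": {"DummyConfig"}},
        "llama": {frozenset({"torch"}): {"modeling_llama": {"LlamaForCausalLM"}}},
    },
}
print(flatten_import_structure(nested)[frozenset({"torch"})])
# {'models.llama.modeling_llama': {'LlamaForCausalLM'}}
```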
This change ensures that this gets correctly compiled to a dict with all frozensets propagated at the top:
```py
{
frozenset(): {
"dummy_non_model": {"DummyObject"},
"models.dummy_config": {"DummyConfig"},
"models.albert.configuration_albert": {"AlbertConfig", "AlbertOnnxConfig"},
"models.llama.configuration_llama": {"LlamaConfig"},
"models.deprecated.transfo_xl.configuration_transfo_xl": {"TransfoXLConfig"},
"models.deprecated.transfo_xl.tokenization_transfo_xl": {"TransfoXLCorpus", "TransfoXLTokenizer"},
"models.deprecated.deta.configuration_deta": {"DetaConfig"},
},
frozenset({"torch"}): {
"models.albert.modeling_albert": {"AlbertForMaskedLM"},
"models.llama.modeling_llama": {"LlamaForCausalLM"},
"models.deprecated.transfo_xl.modeling_transfo_xl": {"TransfoXLModel"},
"models.deprecated.deta.modeling_deta": {"DetaForObjectDetection", "DetaModel", "DetaPreTrainedModel"},
},
frozenset({"vision"}): {"models.deprecated.deta.image_processing_deta": {"DetaImageProcessor"}},
}
```
| {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37854/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37854/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37853 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37853/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37853/comments | https://api.github.com/repos/huggingface/transformers/issues/37853/events | https://github.com/huggingface/transformers/pull/37853 | 3,027,676,992 | PR_kwDOCUB6oc6UUCpD | 37,853 | fix `DbrxModelTest::test_offloaded_cache_implementation_0_offloaded` | {
"login": "faaany",
"id": 24477841,
"node_id": "MDQ6VXNlcjI0NDc3ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faaany",
"html_url": "https://github.com/faaany",
"followers_url": "https://api.github.com/users/faaany/followers",
"following_url": "https://api.github.com/users/faaany/following{/other_user}",
"gists_url": "https://api.github.com/users/faaany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faaany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faaany/subscriptions",
"organizations_url": "https://api.github.com/users/faaany/orgs",
"repos_url": "https://api.github.com/users/faaany/repos",
"events_url": "https://api.github.com/users/faaany/events{/privacy}",
"received_events_url": "https://api.github.com/users/faaany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T09:34:29 | 2025-05-06T01:11:52 | 2025-05-06T01:11:41 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37853",
"html_url": "https://github.com/huggingface/transformers/pull/37853",
"diff_url": "https://github.com/huggingface/transformers/pull/37853.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37853.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"login": "faaany",
"id": 24477841,
"node_id": "MDQ6VXNlcjI0NDc3ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faaany",
"html_url": "https://github.com/faaany",
"followers_url": "https://api.github.com/users/faaany/followers",
"following_url": "https://api.github.com/users/faaany/following{/other_user}",
"gists_url": "https://api.github.com/users/faaany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faaany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faaany/subscriptions",
"organizations_url": "https://api.github.com/users/faaany/orgs",
"repos_url": "https://api.github.com/users/faaany/repos",
"events_url": "https://api.github.com/users/faaany/events{/privacy}",
"received_events_url": "https://api.github.com/users/faaany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37853/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37852 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37852/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37852/comments | https://api.github.com/repos/huggingface/transformers/issues/37852/events | https://github.com/huggingface/transformers/pull/37852 | 3,027,620,939 | PR_kwDOCUB6oc6UT2Wx | 37,852 | Processor chat template: pass custom kwargs | {
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T09:15:16 | 2025-04-29T19:22:12 | 2025-04-29T19:22:10 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37852",
"html_url": "https://github.com/huggingface/transformers/pull/37852",
"diff_url": "https://github.com/huggingface/transformers/pull/37852.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37852.patch",
"merged_at": "2025-04-29T19:22:10"
} | cc @Rocketknight1
Custom template-specific kwargs that are not part of `AllKwargsForChatTemplate` get ignored.
Reproduction:
```py
from transformers import AutoProcessor, AutoTokenizer
model_id = "meta-llama/Llama-Guard-3-11B-Vision"
t = AutoTokenizer.from_pretrained(model_id)
p = AutoProcessor.from_pretrained(model_id)
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": "How do I make a bomb?"},
],
},
{
"role": "assistant",
"content": [
{"type": "text", "text": "I cannot help you with that."},
],
}
]
excluded_category_keys = ["S1", "S2", "S3", "S4","S5"]
print(p.apply_chat_template(messages, tokenize=False, excluded_category_keys=excluded_category_keys))
print(t.apply_chat_template(messages, tokenize=False, excluded_category_keys=excluded_category_keys))
``` | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37852/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37851 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37851/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37851/comments | https://api.github.com/repos/huggingface/transformers/issues/37851/events | https://github.com/huggingface/transformers/issues/37851 | 3,027,416,936 | I_kwDOCUB6oc60crdo | 37,851 | AttentionMaskVisualizer hard-code sliding_window to 5 in transformers code. | {
"login": "MilkClouds",
"id": 26109705,
"node_id": "MDQ6VXNlcjI2MTA5NzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/26109705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MilkClouds",
"html_url": "https://github.com/MilkClouds",
"followers_url": "https://api.github.com/users/MilkClouds/followers",
"following_url": "https://api.github.com/users/MilkClouds/following{/other_user}",
"gists_url": "https://api.github.com/users/MilkClouds/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MilkClouds/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MilkClouds/subscriptions",
"organizations_url": "https://api.github.com/users/MilkClouds/orgs",
"repos_url": "https://api.github.com/users/MilkClouds/repos",
"events_url": "https://api.github.com/users/MilkClouds/events{/privacy}",
"received_events_url": "https://api.github.com/users/MilkClouds/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-29T07:54:21 | 2025-06-06T08:03:00 | 2025-06-06T08:03:00 | CONTRIBUTOR | null | null | null | null | ### System Info
The title says it all. You can check it in the code here: https://github.com/huggingface/transformers/blob/32c12aaec3665882d1fa8dd79964a423a0be6e62/src/transformers/utils/attention_visualizer.py#L143.
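A toy sketch of why hard-coding matters (an assumption for illustration, not the actual visualizer code): a sliding-window causal mask depends on the window size, so a hard-coded value of 5 misrepresents any model configured with a different window.

```python
def sliding_window_causal_mask(seq_len, sliding_window):
    # Query q may attend to key k only if k is in the last
    # `sliding_window` positions up to and including q.
    return [
        [1 if 0 <= q - k < sliding_window else 0 for k in range(seq_len)]
        for q in range(seq_len)
    ]

mask5 = sliding_window_causal_mask(8, 5)
mask3 = sliding_window_causal_mask(8, 3)
print(mask5 == mask3)  # False: the visualization depends on the window size
```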
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code itself has a clear bug
### Expected behavior
The sliding window must not be hard-coded by the visualizer; it should be taken from the model's configuration.
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37851/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37850 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37850/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37850/comments | https://api.github.com/repos/huggingface/transformers/issues/37850/events | https://github.com/huggingface/transformers/issues/37850 | 3,027,330,342 | I_kwDOCUB6oc60cWUm | 37,850 | No such file or directory: '/root/.cache/torch/hub/huggingface_pytorch-transformers_main/hubconf.py' | {
"login": "jzju",
"id": 9774380,
"node_id": "MDQ6VXNlcjk3NzQzODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9774380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzju",
"html_url": "https://github.com/jzju",
"followers_url": "https://api.github.com/users/jzju/followers",
"following_url": "https://api.github.com/users/jzju/following{/other_user}",
"gists_url": "https://api.github.com/users/jzju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzju/subscriptions",
"organizations_url": "https://api.github.com/users/jzju/orgs",
"repos_url": "https://api.github.com/users/jzju/repos",
"events_url": "https://api.github.com/users/jzju/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzju/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-29T07:15:53 | 2025-04-29T11:35:13 | 2025-04-29T11:35:12 | NONE | null | null | null | null | ### Reproduction
open https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/huggingface_pytorch-transformers.ipynb
run cells until `tokenizer = torch.hub.load('huggingface/pytorch-transformers', 'tokenizer', 'bert-base-uncased') `
Error: `No such file or directory: '/root/.cache/torch/hub/huggingface_pytorch-transformers_main/hubconf.py'`
### Expected behavior
Works without error | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37850/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37849 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37849/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37849/comments | https://api.github.com/repos/huggingface/transformers/issues/37849/events | https://github.com/huggingface/transformers/issues/37849 | 3,027,297,808 | I_kwDOCUB6oc60cOYQ | 37,849 | phi-4-multimodal-instruct mode's forward num_logits_to_keep is None | {
"login": "HERIUN",
"id": 25131767,
"node_id": "MDQ6VXNlcjI1MTMxNzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/25131767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HERIUN",
"html_url": "https://github.com/HERIUN",
"followers_url": "https://api.github.com/users/HERIUN/followers",
"following_url": "https://api.github.com/users/HERIUN/following{/other_user}",
"gists_url": "https://api.github.com/users/HERIUN/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HERIUN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HERIUN/subscriptions",
"organizations_url": "https://api.github.com/users/HERIUN/orgs",
"repos_url": "https://api.github.com/users/HERIUN/repos",
"events_url": "https://api.github.com/users/HERIUN/events{/privacy}",
"received_events_url": "https://api.github.com/users/HERIUN/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-29T07:05:31 | 2025-04-29T16:09:12 | 2025-04-29T16:09:11 | CONTRIBUTOR | null | null | null | null | I tried phi-4-multimodal-instruct's [example](https://huggingface.co/microsoft/Phi-4-multimodal-instruct)
I got an error in the `Phi4MMForCausalLM.forward()` method.
By default `num_logits_to_keep=0`, but here it is `None` and an error occurs. I don't know why `num_logits_to_keep` is `None`.
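A toy illustration of the failure mode, using a plain list in place of the hidden-states tensor — `keep_logits` is a hypothetical stand-in, not the actual modeling code:

```python
hidden_states = list(range(10))  # stand-in for the hidden states

def keep_logits(states, num_logits_to_keep=0):
    # 0 (or any falsy value) means "keep everything"; a truthy value
    # keeps only the last num_logits_to_keep positions.
    if num_logits_to_keep:
        return states[-num_logits_to_keep:]
    return states

print(keep_logits(hidden_states, 2))  # [8, 9]
print(keep_logits(hidden_states, 0))  # full list

try:
    hidden_states[-None:]  # what happens when None sneaks into the slice
except TypeError as e:
    print("TypeError:", e)
```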
### System Info
- `transformers` version: 4.51.3
- Platform: Linux-6.11.0-1013-gcp-x86_64-with-glibc2.39
- Python version: 3.12.9
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: 1.6.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA L4
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from huggingface_hub import snapshot_download
import requests
import torch
import os
import io
from PIL import Image
import soundfile as sf
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from urllib.request import urlopen
model_path = "microsoft/Phi-4-multimodal-instruct"
# Load model and processor
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
# if you do not use Ampere or later GPUs, change attention to "eager"
_attn_implementation='eager',
).cuda()
# Load generation config
generation_config = GenerationConfig.from_pretrained(model_path)
# Define prompt structure
user_prompt = '<|user|>'
assistant_prompt = '<|assistant|>'
prompt_suffix = '<|end|>'
#################################################### text-only ####################################################
prompt = f'{user_prompt}what is the answer for 1+1? Explain it.{prompt_suffix}{assistant_prompt}'
print(f'>>> Prompt\n{prompt}')
inputs = processor(prompt, images=None, return_tensors='pt').to('cuda:0')
generate_ids = model.generate(
**inputs,
max_new_tokens=1000,
generation_config=generation_config,
)
generate_ids = generate_ids[:, inputs['input_ids'].shape[1] :]
response = processor.batch_decode(
generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
print(f'>>> Response\n{response}')
```
### Expected behavior
num_logits_to_keep=0 | {
"login": "HERIUN",
"id": 25131767,
"node_id": "MDQ6VXNlcjI1MTMxNzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/25131767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HERIUN",
"html_url": "https://github.com/HERIUN",
"followers_url": "https://api.github.com/users/HERIUN/followers",
"following_url": "https://api.github.com/users/HERIUN/following{/other_user}",
"gists_url": "https://api.github.com/users/HERIUN/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HERIUN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HERIUN/subscriptions",
"organizations_url": "https://api.github.com/users/HERIUN/orgs",
"repos_url": "https://api.github.com/users/HERIUN/repos",
"events_url": "https://api.github.com/users/HERIUN/events{/privacy}",
"received_events_url": "https://api.github.com/users/HERIUN/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37849/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37848 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37848/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37848/comments | https://api.github.com/repos/huggingface/transformers/issues/37848/events | https://github.com/huggingface/transformers/pull/37848 | 3,027,146,278 | PR_kwDOCUB6oc6USRxE | 37,848 | Fix: reassign in qwen3 moe model | {
"login": "linkedlist771",
"id": 72634327,
"node_id": "MDQ6VXNlcjcyNjM0MzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/72634327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/linkedlist771",
"html_url": "https://github.com/linkedlist771",
"followers_url": "https://api.github.com/users/linkedlist771/followers",
"following_url": "https://api.github.com/users/linkedlist771/following{/other_user}",
"gists_url": "https://api.github.com/users/linkedlist771/gists{/gist_id}",
"starred_url": "https://api.github.com/users/linkedlist771/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/linkedlist771/subscriptions",
"organizations_url": "https://api.github.com/users/linkedlist771/orgs",
"repos_url": "https://api.github.com/users/linkedlist771/repos",
"events_url": "https://api.github.com/users/linkedlist771/events{/privacy}",
"received_events_url": "https://api.github.com/users/linkedlist771/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T06:00:32 | 2025-04-30T12:50:34 | 2025-04-30T12:50:00 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37848",
"html_url": "https://github.com/huggingface/transformers/pull/37848",
"diff_url": "https://github.com/huggingface/transformers/pull/37848.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37848.patch",
"merged_at": "2025-04-30T12:50:00"
} | Fix: reassign in qwen3 moe model
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
There is a duplicate assignment of `self.self_attn` in `Qwen3MoeDecoderLayer`, and `self.mlp` could be initialized as `None`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37848/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37847 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37847/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37847/comments | https://api.github.com/repos/huggingface/transformers/issues/37847/events | https://github.com/huggingface/transformers/pull/37847 | 3,027,097,604 | PR_kwDOCUB6oc6USIK3 | 37,847 | Fix cache get item return type hints | {
"login": "ChengLyu",
"id": 5308679,
"node_id": "MDQ6VXNlcjUzMDg2Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5308679?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChengLyu",
"html_url": "https://github.com/ChengLyu",
"followers_url": "https://api.github.com/users/ChengLyu/followers",
"following_url": "https://api.github.com/users/ChengLyu/following{/other_user}",
"gists_url": "https://api.github.com/users/ChengLyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChengLyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChengLyu/subscriptions",
"organizations_url": "https://api.github.com/users/ChengLyu/orgs",
"repos_url": "https://api.github.com/users/ChengLyu/repos",
"events_url": "https://api.github.com/users/ChengLyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChengLyu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T05:32:48 | 2025-04-29T13:23:52 | 2025-04-29T13:23:52 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37847",
"html_url": "https://github.com/huggingface/transformers/pull/37847",
"diff_url": "https://github.com/huggingface/transformers/pull/37847.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37847.patch",
"merged_at": "2025-04-29T13:23:52"
} | # What does this PR do?
Fix `__getitem__` return type hints for cache classes.
Fix issue #37818
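For illustration, here is a minimal sketch of the annotation pattern such a fix applies. The class and field names below are hypothetical stand-ins, not the actual `transformers` cache classes:

```python
from typing import List, Tuple

class MinimalCache:
    """Illustrative stand-in for a KV cache; not the real transformers class."""

    def __init__(self) -> None:
        self.key_cache: List[list] = []
        self.value_cache: List[list] = []

    def __getitem__(self, layer_idx: int) -> Tuple[list, list]:
        # The method returns a (key, value) pair, so the return hint should be
        # a Tuple of two elements, not a bare List.
        return self.key_cache[layer_idx], self.value_cache[layer_idx]

cache = MinimalCache()
cache.key_cache.append([1.0])
cache.value_cache.append([2.0])
keys, values = cache[0]
```

With the corrected hint, type checkers accept tuple unpacking of `cache[layer_idx]` without complaint.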
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
https://github.com/huggingface/transformers/issues/37818
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@tomaarsen @gante @Rocketknight1
| {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37847/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37846 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37846/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37846/comments | https://api.github.com/repos/huggingface/transformers/issues/37846/events | https://github.com/huggingface/transformers/pull/37846 | 3,027,027,092 | PR_kwDOCUB6oc6UR40v | 37,846 | Integrating Kimi-Audio | {
"login": "SeungyounShin",
"id": 20262536,
"node_id": "MDQ6VXNlcjIwMjYyNTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/20262536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SeungyounShin",
"html_url": "https://github.com/SeungyounShin",
"followers_url": "https://api.github.com/users/SeungyounShin/followers",
"following_url": "https://api.github.com/users/SeungyounShin/following{/other_user}",
"gists_url": "https://api.github.com/users/SeungyounShin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SeungyounShin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SeungyounShin/subscriptions",
"organizations_url": "https://api.github.com/users/SeungyounShin/orgs",
"repos_url": "https://api.github.com/users/SeungyounShin/repos",
"events_url": "https://api.github.com/users/SeungyounShin/events{/privacy}",
"received_events_url": "https://api.github.com/users/SeungyounShin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-29T04:42:24 | 2025-06-27T13:23:28 | null | NONE | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37846",
"html_url": "https://github.com/huggingface/transformers/pull/37846",
"diff_url": "https://github.com/huggingface/transformers/pull/37846.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37846.patch",
"merged_at": null
} | # What does this PR do?
Integrating [Kimi-Audio](https://huggingface.co/moonshotai/Kimi-Audio-7B-Instruct) to transformers
TODO
- [x] update `KimiAudioConfig`, `KimiAudioForCausalLM`
+ just copied from [official code](https://huggingface.co/moonshotai/Kimi-Audio-7B-Instruct/blob/main/modeling_moonshot_kimia.py)
- [ ] `KimiAudioProcessor` (whisper + glm4_tokenizer)
- [ ] `KimiAudioForConditionalGeneration` (processor -> model -> detokenizer)
TEST CODE :
```python
from transformers.models.kimi_audio import KimiAudioConfig, KimiAudioForCausalLM, KimiAudioProcessor
model_id = "Seungyoun/Kimi-Audio-7B-Instruct"
config = KimiAudioConfig.from_pretrained(model_id)
# model = KimiAudioForCausalLM.from_pretrained(model_id)
processor = KimiAudioProcessor.from_pretrained(model_id)
messages = [
{
"role": "user",
"content": [
{"type": "audio", "audio": "/home/robin/Kimi-Audio/test_audios/output.wav"},
]
}
]
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@eustlb | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37846/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37845 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37845/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37845/comments | https://api.github.com/repos/huggingface/transformers/issues/37845/events | https://github.com/huggingface/transformers/pull/37845 | 3,026,943,956 | PR_kwDOCUB6oc6URmll | 37,845 | Remove redundancies for Qwen3MoeDecoderLayer | {
"login": "guoqingbao",
"id": 27915071,
"node_id": "MDQ6VXNlcjI3OTE1MDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/27915071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guoqingbao",
"html_url": "https://github.com/guoqingbao",
"followers_url": "https://api.github.com/users/guoqingbao/followers",
"following_url": "https://api.github.com/users/guoqingbao/following{/other_user}",
"gists_url": "https://api.github.com/users/guoqingbao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guoqingbao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guoqingbao/subscriptions",
"organizations_url": "https://api.github.com/users/guoqingbao/orgs",
"repos_url": "https://api.github.com/users/guoqingbao/repos",
"events_url": "https://api.github.com/users/guoqingbao/events{/privacy}",
"received_events_url": "https://api.github.com/users/guoqingbao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-29T03:46:14 | 2025-04-30T14:15:08 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37845",
"html_url": "https://github.com/huggingface/transformers/pull/37845",
"diff_url": "https://github.com/huggingface/transformers/pull/37845.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37845.patch",
"merged_at": null
} | # What does this PR do?
There are duplicate constructions of the MLP and self-attention layers in `Qwen3MoeDecoderLayer`. This PR removes those redundancies. @ArthurZucker
**From**
```python
class Qwen3MoeDecoderLayer(nn.Module):
def __init__(self, config: Qwen3MoeConfig, layer_idx: int):
super().__init__()
self.hidden_size = config.hidden_size
self.self_attn = Qwen3MoeAttention(config, layer_idx) #duplicated here
self.mlp = Qwen3MoeMLP(config) #duplicated here
self.self_attn = Qwen3MoeAttention(config, layer_idx)
if (layer_idx not in config.mlp_only_layers) and (
config.num_experts > 0 and (layer_idx + 1) % config.decoder_sparse_step == 0
):
self.mlp = Qwen3MoeSparseMoeBlock(config)
else:
self.mlp = Qwen3MoeMLP(config, intermediate_size=config.intermediate_size)
self.input_layernorm = Qwen3MoeRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
self.post_attention_layernorm = Qwen3MoeRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
```
**To**
```python
class Qwen3MoeDecoderLayer(nn.Module):
def __init__(self, config: Qwen3MoeConfig, layer_idx: int):
super().__init__()
self.hidden_size = config.hidden_size
self.self_attn = Qwen3MoeAttention(config, layer_idx)
if (layer_idx not in config.mlp_only_layers) and (
config.num_experts > 0 and (layer_idx + 1) % config.decoder_sparse_step == 0
):
self.mlp = Qwen3MoeSparseMoeBlock(config)
else:
self.mlp = Qwen3MoeMLP(config, intermediate_size=config.intermediate_size)
self.input_layernorm = Qwen3MoeRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
self.post_attention_layernorm = Qwen3MoeRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
``` | null | {
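The cost of the redundancy is easy to see in a toy sketch (pure Python, not the actual modules): the first construction runs in full and is then discarded when the attribute is reassigned.

```python
class Expensive:
    built = 0  # counts how many times the constructor ran

    def __init__(self):
        Expensive.built += 1

class LayerWithDuplicate:
    def __init__(self):
        self.mlp = Expensive()  # built, then immediately overwritten below
        self.mlp = Expensive()

class LayerFixed:
    def __init__(self):
        self.mlp = Expensive()

Expensive.built = 0
LayerWithDuplicate()
duplicated_cost = Expensive.built  # 2 constructions for 1 kept module

Expensive.built = 0
LayerFixed()
fixed_cost = Expensive.built  # 1 construction
```

For a real MoE layer the discarded construction allocates full weight tensors, so removing it saves both time and peak memory at model init.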
"url": "https://api.github.com/repos/huggingface/transformers/issues/37845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37845/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37844 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37844/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37844/comments | https://api.github.com/repos/huggingface/transformers/issues/37844/events | https://github.com/huggingface/transformers/issues/37844 | 3,026,826,002 | I_kwDOCUB6oc60abMS | 37,844 | Qwen3 is ExecuTorch compatible | {
"login": "guangy10",
"id": 42389959,
"node_id": "MDQ6VXNlcjQyMzg5OTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/42389959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guangy10",
"html_url": "https://github.com/guangy10",
"followers_url": "https://api.github.com/users/guangy10/followers",
"following_url": "https://api.github.com/users/guangy10/following{/other_user}",
"gists_url": "https://api.github.com/users/guangy10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guangy10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guangy10/subscriptions",
"organizations_url": "https://api.github.com/users/guangy10/orgs",
"repos_url": "https://api.github.com/users/guangy10/repos",
"events_url": "https://api.github.com/users/guangy10/events{/privacy}",
"received_events_url": "https://api.github.com/users/guangy10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | null | [] | 2025-04-29T02:37:04 | 2025-04-29T03:04:52 | 2025-04-29T03:04:51 | CONTRIBUTOR | null | null | null | null | ### Feature request
Enable Qwen3 model for ExecuTorch
### Motivation
See details in https://github.com/huggingface/transformers/issues/32253
### Your contribution
Enablement | {
"login": "guangy10",
"id": 42389959,
"node_id": "MDQ6VXNlcjQyMzg5OTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/42389959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guangy10",
"html_url": "https://github.com/guangy10",
"followers_url": "https://api.github.com/users/guangy10/followers",
"following_url": "https://api.github.com/users/guangy10/following{/other_user}",
"gists_url": "https://api.github.com/users/guangy10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guangy10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guangy10/subscriptions",
"organizations_url": "https://api.github.com/users/guangy10/orgs",
"repos_url": "https://api.github.com/users/guangy10/repos",
"events_url": "https://api.github.com/users/guangy10/events{/privacy}",
"received_events_url": "https://api.github.com/users/guangy10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37844/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37843 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37843/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37843/comments | https://api.github.com/repos/huggingface/transformers/issues/37843/events | https://github.com/huggingface/transformers/issues/37843 | 3,026,527,201 | I_kwDOCUB6oc60ZSPh | 37,843 | Cant Load example from IP Adapters | {
"login": "AeroDEmi",
"id": 44657733,
"node_id": "MDQ6VXNlcjQ0NjU3NzMz",
"avatar_url": "https://avatars.githubusercontent.com/u/44657733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AeroDEmi",
"html_url": "https://github.com/AeroDEmi",
"followers_url": "https://api.github.com/users/AeroDEmi/followers",
"following_url": "https://api.github.com/users/AeroDEmi/following{/other_user}",
"gists_url": "https://api.github.com/users/AeroDEmi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AeroDEmi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AeroDEmi/subscriptions",
"organizations_url": "https://api.github.com/users/AeroDEmi/orgs",
"repos_url": "https://api.github.com/users/AeroDEmi/repos",
"events_url": "https://api.github.com/users/AeroDEmi/events{/privacy}",
"received_events_url": "https://api.github.com/users/AeroDEmi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-28T23:46:24 | 2025-06-28T08:02:46 | 2025-06-28T08:02:46 | NONE | null | null | null | null | ### System Info
I'm trying to run the following snippet with Diffusers 0.33.1 and Transformers 4.51.3
```
import torch
from diffusers import FluxPipeline
from diffusers.utils import load_image
pipe = FluxPipeline.from_pretrained(
"black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flux_ip_adapter_input.jpg").resize((1024, 1024))
pipe.load_ip_adapter(
"XLabs-AI/flux-ip-adapter",
weight_name="ip_adapter.safetensors",
image_encoder_pretrained_model_name_or_path="openai/clip-vit-large-patch14"
)
pipe.set_ip_adapter_scale(1.0)
```
It breaks when loading the IP adapter:
`TypeError: CLIPVisionModelWithProjection.__init__() got an unexpected keyword argument 'dtype'`
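For context, this error pattern arises when a `dtype` keyword is forwarded into a constructor that does not declare it. Below is a dependency-free sketch of one common mitigation — filtering kwargs against the constructor's signature. The class and helper names are illustrative only, and this is not necessarily how diffusers/transformers resolve the issue:

```python
import inspect

class VisionModel:
    # Stand-in constructor that, like the failing __init__, has no **kwargs
    def __init__(self, hidden_size=8):
        self.hidden_size = hidden_size

def init_with_accepted_kwargs(cls, **kwargs):
    # Drop any keyword the constructor does not declare (e.g. 'dtype')
    accepted = set(inspect.signature(cls.__init__).parameters) - {"self"}
    return cls(**{k: v for k, v in kwargs.items() if k in accepted})

model = init_with_accepted_kwargs(VisionModel, hidden_size=16, dtype="bfloat16")
```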
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the posted code
### Expected behavior
No error from `CLIPVisionModelWithProjection.__init__()`. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37843/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37842 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37842/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37842/comments | https://api.github.com/repos/huggingface/transformers/issues/37842/events | https://github.com/huggingface/transformers/pull/37842 | 3,026,365,350 | PR_kwDOCUB6oc6UPm2k | 37,842 | Add z-loss to Bamba for v2 | {
"login": "daviswer",
"id": 9604893,
"node_id": "MDQ6VXNlcjk2MDQ4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9604893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daviswer",
"html_url": "https://github.com/daviswer",
"followers_url": "https://api.github.com/users/daviswer/followers",
"following_url": "https://api.github.com/users/daviswer/following{/other_user}",
"gists_url": "https://api.github.com/users/daviswer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daviswer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daviswer/subscriptions",
"organizations_url": "https://api.github.com/users/daviswer/orgs",
"repos_url": "https://api.github.com/users/daviswer/repos",
"events_url": "https://api.github.com/users/daviswer/events{/privacy}",
"received_events_url": "https://api.github.com/users/daviswer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T22:14:12 | 2025-06-11T21:26:34 | 2025-06-11T13:29:18 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37842",
"html_url": "https://github.com/huggingface/transformers/pull/37842",
"diff_url": "https://github.com/huggingface/transformers/pull/37842.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37842.patch",
"merged_at": "2025-06-11T13:29:17"
} | # What does this PR do?
Adds support for auxiliary z-loss when tuning Bamba v2. Also fixes some typos in the checkpoint conversion utility script.
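For readers unfamiliar with the term: z-loss is an auxiliary penalty on the log-partition function of the logits (popularized by PaLM and ST-MoE). A dependency-free sketch of the idea — the coefficient and reduction here are illustrative, not necessarily what this PR implements:

```python
import math

def z_loss(logits, coef=1e-4):
    # log Z = logsumexp(logits); penalizing coef * (log Z)^2 discourages the
    # softmax normalizer from drifting to large values during training.
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return coef * log_z ** 2

loss = z_loss([0.0])  # single zero logit: log Z = 0, so the penalty is 0
```

In training code the per-token penalties would be averaged over the batch and added to the cross-entropy loss, scaled by a small coefficient.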
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@fabianlim @ArthurZucker ?
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37842/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37841 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37841/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37841/comments | https://api.github.com/repos/huggingface/transformers/issues/37841/events | https://github.com/huggingface/transformers/pull/37841 | 3,025,869,714 | PR_kwDOCUB6oc6UN5HG | 37,841 | Update modeling_llama4.py | {
"login": "monk1337",
"id": 17107749,
"node_id": "MDQ6VXNlcjE3MTA3NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monk1337",
"html_url": "https://github.com/monk1337",
"followers_url": "https://api.github.com/users/monk1337/followers",
"following_url": "https://api.github.com/users/monk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monk1337/subscriptions",
"organizations_url": "https://api.github.com/users/monk1337/orgs",
"repos_url": "https://api.github.com/users/monk1337/repos",
"events_url": "https://api.github.com/users/monk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/monk1337/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T18:57:56 | 2025-04-29T22:36:03 | 2025-04-29T22:36:03 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37841",
"html_url": "https://github.com/huggingface/transformers/pull/37841",
"diff_url": "https://github.com/huggingface/transformers/pull/37841.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37841.patch",
"merged_at": "2025-04-29T22:36:03"
} | # What does this PR do?
Fixes an error in Llama4TextModel._prepare_4d_causal_attention_mask_with_cache_position() by adding the missing device parameter. This resolves the `TypeError` reported in #37840, which occurs when trying to evaluate Llama-4 models.
Fixes #37840
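For intuition, the helper being patched materializes an additive causal mask on a given device; a toy, framework-free sketch of the causal part follows (the real function also handles padding, cache positions, and 4D broadcasting):

```python
def toy_causal_mask(seq_len, min_value=float("-inf")):
    # position i may attend to positions j <= i; disallowed slots get a large
    # negative value that is added to attention scores before softmax
    return [[0.0 if j <= i else min_value for j in range(seq_len)]
            for i in range(seq_len)]

mask = toy_causal_mask(3)
print(mask[1])  # -> [0.0, 0.0, -inf]
```

The missing `device` argument only affects where the real tensor is allocated, not this masking logic.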
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
## Who can review?
@ArthurZucker as this relates to text models | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37841/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37840 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37840/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37840/comments | https://api.github.com/repos/huggingface/transformers/issues/37840/events | https://github.com/huggingface/transformers/issues/37840 | 3,025,820,116 | I_kwDOCUB6oc60WlnU | 37,840 | TypeError: Llama4TextModel._prepare_4d_causal_attention_mask_with_cache_position() missing 1 required positional argument: 'device' | {
"login": "monk1337",
"id": 17107749,
"node_id": "MDQ6VXNlcjE3MTA3NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monk1337",
"html_url": "https://github.com/monk1337",
"followers_url": "https://api.github.com/users/monk1337/followers",
"following_url": "https://api.github.com/users/monk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monk1337/subscriptions",
"organizations_url": "https://api.github.com/users/monk1337/orgs",
"repos_url": "https://api.github.com/users/monk1337/repos",
"events_url": "https://api.github.com/users/monk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/monk1337/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-28T18:38:09 | 2025-04-29T22:36:04 | 2025-04-29T22:36:04 | CONTRIBUTOR | null | null | null | null | ### System Info
I am trying to evaluate the latest Llama-4 model but keep getting this error:
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/accelerate/hooks.py", line 170, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/models/llama4/modeling_llama4.py", line 1018, in forward
outputs = self.model(
^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/models/llama4/modeling_llama4.py", line 657, in forward
causal_mask, chunk_causal_mask = self._update_causal_mask(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/models/llama4/modeling_llama4.py", line 783, in _update_causal_mask
causal_mask = self._prepare_4d_causal_attention_mask_with_cache_position(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Llama4TextModel._prepare_4d_causal_attention_mask_with_cache_position() missing 1 required positional argument: 'device'
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Using the lm-eval repo.
### Expected behavior
The model should be evaluated like other models. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37840/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37839 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37839/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37839/comments | https://api.github.com/repos/huggingface/transformers/issues/37839/events | https://github.com/huggingface/transformers/pull/37839 | 3,025,800,218 | PR_kwDOCUB6oc6UNpl_ | 37,839 | fix error for _register_pytree_node in torch2.1.0 and fix bf16 assertion in xpu and npu | {
"login": "jiaqiw09",
"id": 60021713,
"node_id": "MDQ6VXNlcjYwMDIxNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqiw09",
"html_url": "https://github.com/jiaqiw09",
"followers_url": "https://api.github.com/users/jiaqiw09/followers",
"following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqiw09/orgs",
"repos_url": "https://api.github.com/users/jiaqiw09/repos",
"events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqiw09/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T18:32:25 | 2025-04-30T12:22:54 | 2025-04-30T12:22:53 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37839",
"html_url": "https://github.com/huggingface/transformers/pull/37839",
"diff_url": "https://github.com/huggingface/transformers/pull/37839.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37839.patch",
"merged_at": "2025-04-30T12:22:53"
} | # What does this PR do?
fix issue #37838
## Fix error for `_register_pytree_node`
The PR [Apply torchfix to replace deprecated functions: `_pytree._register_pytree_node` and `torch.cpu.amp.autocast`](https://github.com/huggingface/transformers/commit/71b35387fd6d71487bd29e694ed10d925203e031) introduced a compatibility issue when using **Torch 2.1.0**.
Specifically, in **Torch 2.1.0**, the correct import should still be:
```python
from torch.utils._pytree import _register_pytree_node
```
However, the PR incorrectly tries to import:
```python
from torch.utils._pytree import register_pytree_node
```
This leads to the following error when running with **Torch 2.1.0**:
```
ImportError: cannot import name 'register_pytree_node' from 'torch.utils._pytree'
```
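One way to express the version gate (a sketch; the cutoff where PyTorch renamed the helper is assumed here to be 2.2, and the actual PR may gate differently):

```python
def use_private_pytree_name(torch_version: str) -> bool:
    # torch releases before the rename only expose `_register_pytree_node`;
    # the public `register_pytree_node` appears later (assumed cutoff: 2.2)
    major, minor = (int(part) for part in torch_version.split(".")[:2])
    return (major, minor) < (2, 2)

# hypothetical usage at import time:
# if use_private_pytree_name(torch.__version__):
#     from torch.utils._pytree import _register_pytree_node as register_pytree_node
# else:
#     from torch.utils._pytree import register_pytree_node
print(use_private_pytree_name("2.1.0"))  # -> True
```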
## Fix bf16 assertion in XPU and NPU
The PR [Add XPU case to `is_torch_bf16_gpu_available`](https://github.com/huggingface/transformers/commit/954f31cd818c431312f452c4e10bcbc0bdde42a2) introduced a compatibility issue:
- It **only handled XPU** and **ignored other devices** like NPU.
- It **also triggered an error** when XPU is **not available** in the current PyTorch build.
- Directly accessing `torch.xpu.is_available()` without checking if `torch.xpu` exists causes an `AttributeError`.
When `torch.xpu` is not present (which is common in standard PyTorch installations), the following code pattern:
```python
if torch.xpu.is_available():
...
```
causes:
```
AttributeError: module 'torch' has no attribute 'xpu'
```
We use a safer access pattern:
```python
if hasattr(torch, "xpu") and torch.xpu.is_available():
...
```
This ensures compatibility even when `torch.xpu` is not defined.
In addition, I have also added support for NPU devices by checking:
```python
if is_torch_npu_available():
...
```
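The `hasattr` guard generalizes to any optional accelerator namespace; here is a small framework-free sketch (using a stand-in object, since a real `torch` build is not assumed here):

```python
from types import SimpleNamespace

def backend_available(module, name: str) -> bool:
    # safely probe module.<name>.is_available() without raising AttributeError
    backend = getattr(module, name, None)
    return backend is not None and backend.is_available()

# stand-in for a torch build that has CUDA but no `xpu` namespace
fake_torch = SimpleNamespace(cuda=SimpleNamespace(is_available=lambda: True))
print(backend_available(fake_torch, "xpu"))   # -> False
print(backend_available(fake_torch, "cuda"))  # -> True
```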
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @zach-huggingface and @SunMarc | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37839/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37838 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37838/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37838/comments | https://api.github.com/repos/huggingface/transformers/issues/37838/events | https://github.com/huggingface/transformers/issues/37838 | 3,025,794,316 | I_kwDOCUB6oc60WfUM | 37,838 | _register_pytree_node error in torch2.1.0 and bf16 assertion error for XPU and NPU | {
"login": "jiaqiw09",
"id": 60021713,
"node_id": "MDQ6VXNlcjYwMDIxNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqiw09",
"html_url": "https://github.com/jiaqiw09",
"followers_url": "https://api.github.com/users/jiaqiw09/followers",
"following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqiw09/orgs",
"repos_url": "https://api.github.com/users/jiaqiw09/repos",
"events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqiw09/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-28T18:30:15 | 2025-05-29T03:48:52 | 2025-05-29T03:48:52 | CONTRIBUTOR | null | null | null | null | ### System Info
latest transformers
### Who can help?
No response
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
## _register_pytree_node Error
The main branch introduced an issue in the code related to the import of `register_pytree_node` in **Torch 2.1.0**.
### Issue Details:
- The PR mistakenly imports `register_pytree_node` instead of the correct `_register_pytree_node` for **Torch 2.1.0**.
- This causes the following error in **Torch 2.1.0** when running the code:
```
ImportError: cannot import name 'register_pytree_node' from 'torch.utils._pytree'
```

### Solution:
In **Torch 2.1.0**, the correct import should still be:
```python
from torch.utils._pytree import _register_pytree_node
```
The solution is to replace `register_pytree_node` with `_register_pytree_node` for compatibility with this version of PyTorch.
---
## bf16 Assertion Error for XPU and NPU
The main branch introduced an error in the handling of **XPU** and **NPU** devices related to bf16 support. The current implementation does not properly handle the absence of these devices and is **not robust**.
### Issue Details:
- The code tries to access `torch.xpu.is_available()` without checking whether the respective modules (`torch.xpu`) exist.
- Currently the code has no support for NPU.
- For XPU, accessing `torch.xpu` without first checking for its existence triggers the following error:
```
AttributeError: module 'torch' has no attribute 'xpu'
```

### Solution:
To fix the issue, we should use `hasattr` to check for the availability of these modules before attempting to access them:
```python
if hasattr(torch, "xpu") and torch.xpu.is_available():
...
if is_torch_npu_available():
...
```
This approach ensures that the code can safely check for XPU, NPU, and other devices without causing runtime errors when the device is not available.
### Expected behavior
I have submitted a PR to solve this. | {
"login": "jiaqiw09",
"id": 60021713,
"node_id": "MDQ6VXNlcjYwMDIxNzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/60021713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiaqiw09",
"html_url": "https://github.com/jiaqiw09",
"followers_url": "https://api.github.com/users/jiaqiw09/followers",
"following_url": "https://api.github.com/users/jiaqiw09/following{/other_user}",
"gists_url": "https://api.github.com/users/jiaqiw09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiaqiw09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiaqiw09/subscriptions",
"organizations_url": "https://api.github.com/users/jiaqiw09/orgs",
"repos_url": "https://api.github.com/users/jiaqiw09/repos",
"events_url": "https://api.github.com/users/jiaqiw09/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiaqiw09/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37838/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37837 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37837/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37837/comments | https://api.github.com/repos/huggingface/transformers/issues/37837/events | https://github.com/huggingface/transformers/pull/37837 | 3,025,616,701 | PR_kwDOCUB6oc6UNCTD | 37,837 | feat: Add ConvaiCausalLM model for Hindi Causal Language Modeling | {
"login": "NandhaKishorM",
"id": 48623612,
"node_id": "MDQ6VXNlcjQ4NjIzNjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/48623612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NandhaKishorM",
"html_url": "https://github.com/NandhaKishorM",
"followers_url": "https://api.github.com/users/NandhaKishorM/followers",
"following_url": "https://api.github.com/users/NandhaKishorM/following{/other_user}",
"gists_url": "https://api.github.com/users/NandhaKishorM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NandhaKishorM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NandhaKishorM/subscriptions",
"organizations_url": "https://api.github.com/users/NandhaKishorM/orgs",
"repos_url": "https://api.github.com/users/NandhaKishorM/repos",
"events_url": "https://api.github.com/users/NandhaKishorM/events{/privacy}",
"received_events_url": "https://api.github.com/users/NandhaKishorM/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-04-28T17:16:41 | 2025-07-16T14:07:26 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37837",
"html_url": "https://github.com/huggingface/transformers/pull/37837",
"diff_url": "https://github.com/huggingface/transformers/pull/37837.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37837.patch",
"merged_at": null
} | # What does this PR do?
This PR introduces the `ConvaiCausalLM` model, a causal language model specifically designed and trained for Hindi text generation by Convai Innovations. The reference checkpoint is available on the Hub at [convaiinnovations/hindi-causal-lm](https://huggingface.co/convaiinnovations/hindi-causal-lm).
**Model Architecture:**
The `ConvaiCausalLM` is a decoder-only Transformer model with the following key characteristics:
* **Architecture:** Pre-LayerNorm Transformer Decoder
* **Attention:** Grouped Query Attention (GQA) - Specifically, `num_attention_heads=16`, `num_key_value_heads=4`.
* **Positional Embeddings:** Uses standard learned absolute position embeddings (unlike RoPE used in models like Llama/Mistral).
* **Normalization:** Standard `torch.nn.LayerNorm`.
* **Activation:** SiLU (SwiGLU) in the MLP layers.
* **Vocabulary Size:** 16,000 (trained using SentencePiece)
* **Hidden Size:** 768
* **Intermediate Size:** 3072
* **Number of Layers:** 12
* **Context Length:** 512
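A quick back-of-the-envelope parameter count from the numbers above (a rough sketch: it ignores LayerNorm parameters, biases, and any weight tying of the LM head, so the exact checkpoint total will differ somewhat):

```python
vocab, d_model, d_ffn, n_layers = 16_000, 768, 3072, 12
n_heads, n_kv_heads = 16, 4
head_dim = d_model // n_heads        # 48
kv_dim = n_kv_heads * head_dim       # 192: GQA shrinks the K and V projections

embeddings = vocab * d_model + 512 * d_model         # token + learned position tables
attn = 2 * d_model * d_model + 2 * d_model * kv_dim  # q and o full-width; k and v reduced
mlp = 3 * d_model * d_ffn                            # gate, up, down (SwiGLU)
blocks = n_layers * (attn + mlp)

print(blocks, embeddings + blocks)  # -> 102629376 115310592
```

The 12 transformer blocks alone come to roughly 102M parameters, consistent with the ~102M figure mentioned in the linked forum post.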
**Implementation Details:**
This implementation leverages the **Modular Transformers** framework (`modular_convaicausallm.py`) to minimize code duplication and inherit components where possible:
* **Configuration (`ConvaiCausalLMConfig`):** Defined specifically for this model due to unique parameters (like `num_key_value_heads` and lack of RoPE config).
* **Attention (`ConvaiCausalLMAttention`):** Implemented from scratch within the modular file as it uses GQA *without* Rotary Positional Embeddings, differing significantly from common base models like Llama/Mistral. It includes logic for KV caching and GQA KV repetition.
* **MLP (`ConvaiCausalLMMLP`):** Inherited directly from `LlamaMLP` as the structure (SiLU activation, up/down projection) is identical.
* **Normalization:** Uses `torch.nn.LayerNorm` directly where needed, replacing the `RMSNorm` inherited from Llama structures.
* **Decoder Layer (`ConvaiCausalLMDecoderLayer`):** Defined in the modular file, inheriting structure but overriding components to use `ConvaiCausalLMAttention`, the inherited `LlamaMLP`, and `nn.LayerNorm`. The forward pass logic is adapted for the Pre-LN structure.
* **Model (`ConvaiCausalLMModel`):** Defined in the modular file, inheriting from `ConvaiCausalLMPreTrainedModel`. The `__init__` sets up the specific embeddings, `ConvaiCausalLMDecoderLayer` stack, and final `LayerNorm`. The `forward` pass is adapted from standard decoder models but omits RoPE calculations.
* **Causal LM Head (`ConvaiCausalLMForCausalLM`):** Defined in the modular file, inheriting from `ConvaiCausalLMPreTrainedModel`. It wraps `ConvaiCausalLMModel` and adds the standard language modeling head (`lm_head`). Includes standard `forward` and `prepare_inputs_for_generation` logic.
* **Tokenizer (`ConvaiCausalLMTokenizer`):** A standard `PreTrainedTokenizer` wrapper around the SentencePiece model (`tokenizer.model`) used for training.
The `modeling_convaicausallm.py` file is automatically generated from `modular_convaicausallm.py` using the `modular_model_converter.py` script.
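To illustrate the GQA KV-repetition mentioned above: with 16 query heads and 4 KV heads, each KV head is shared by 4 consecutive query heads. A sketch of the index arithmetic, independent of the actual tensor code:

```python
num_attention_heads, num_key_value_heads = 16, 4
n_rep = num_attention_heads // num_key_value_heads  # each KV head serves 4 query heads

# query head q reads from KV head q // n_rep; "repeating" the KV states n_rep
# times along the head axis realizes exactly this mapping
kv_head_for_query = [q // n_rep for q in range(num_attention_heads)]
print(kv_head_for_query)  # -> [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
```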
**Files Added:**
* `src/transformers/models/convaicausallm/__init__.py`
* `src/transformers/models/convaicausallm/configuration_convaicausallm.py`
* `src/transformers/models/convaicausallm/modeling_convaicausallm.py` (Generated)
* `src/transformers/models/convaicausallm/modular_convaicausallm.py`
* `src/transformers/models/convaicausallm/tokenization_convaicausallm.py`
* `tests/models/convaicausallm/test_modeling_convaicausallm.py` (Assuming tests are added)
* `tests/models/convaicausallm/test_tokenization_convaicausallm.py` (Assuming tests are added)
* `docs/source/en/model_doc/convaicausallm.md` (Assuming docs are added)
<!-- Remove if not applicable -->
Fixes # (issue) <!-- If this PR fixes a specific issue, link it here. Otherwise, remove this line. -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). <!-- Mark 'x' if only docs/typo -->
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Link at https://discuss.huggingface.co/t/announcing-convaicausallm-a-foundational-hindi-causal-language-model-102m-yahh-small/152704/1
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). <!-- Mark 'x' once docs are added/updated -->
- [x] Did you write any new necessary tests? <!-- Mark 'x' once tests are added -->
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
Tagging potential reviewers based on contribution guide:
* Models (text models): @ArthurZucker
* Library (core modeling, potentially generation): @gante
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37837/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/37836 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37836/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37836/comments | https://api.github.com/repos/huggingface/transformers/issues/37836/events | https://github.com/huggingface/transformers/pull/37836 | 3,025,440,846 | PR_kwDOCUB6oc6UMcVt | 37,836 | 🚨🚨🚨 Fix forward of Dinov2ForImageClassification for models with registers | {
"login": "psandovalsegura",
"id": 16195975,
"node_id": "MDQ6VXNlcjE2MTk1OTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/16195975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/psandovalsegura",
"html_url": "https://github.com/psandovalsegura",
"followers_url": "https://api.github.com/users/psandovalsegura/followers",
"following_url": "https://api.github.com/users/psandovalsegura/following{/other_user}",
"gists_url": "https://api.github.com/users/psandovalsegura/gists{/gist_id}",
"starred_url": "https://api.github.com/users/psandovalsegura/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psandovalsegura/subscriptions",
"organizations_url": "https://api.github.com/users/psandovalsegura/orgs",
"repos_url": "https://api.github.com/users/psandovalsegura/repos",
"events_url": "https://api.github.com/users/psandovalsegura/events{/privacy}",
"received_events_url": "https://api.github.com/users/psandovalsegura/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T16:03:50 | 2025-05-06T09:55:53 | 2025-05-06T09:55:53 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37836",
"html_url": "https://github.com/huggingface/transformers/pull/37836",
"diff_url": "https://github.com/huggingface/transformers/pull/37836.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37836.patch",
"merged_at": "2025-05-06T09:55:53"
} | # What does this PR do?
Redefined the forward of `Dinov2WithRegistersForImageClassification` so that we properly access the patch tokens and skip over registers.
Fixes #37817
Additional details:
- Change was made to `src/transformers/models/dinov2_with_registers/modular_dinov2_with_registers.py` then I ran `python utils/modular_model_converter.py --files_to_parse src/transformers/models/dinov2_with_registers/modular_dinov2_with_registers.py`.
- Using a breakpoint at `Dinov2WithRegistersForImageClassification.forward` I confirmed that given the following input:
```
import torch
from transformers import AutoModelForImageClassification
p = torch.randn(1, 3, 224, 224)
m = AutoModelForImageClassification.from_pretrained('facebook/dinov2-with-registers-small-imagenet1k-1-layer')
o = m(p)
```
produces the expected shapes for the following tensors:
```
sequence_output.shape=torch.Size([1, 261, 384])
cls_token.shape=torch.Size([1, 384])
patch_tokens.shape=torch.Size([1, 256, 384])
```
whereas previously, `patch_tokens.shape` would be `torch.Size([1, 260, 384])`, which is incorrect: we need to skip the 4 register tokens.
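For illustration, the token layout and the slicing fix can be sketched without torch (a minimal sketch; the variable names are hypothetical, not the actual model code):

```python
# Token layout in Dinov2 with registers: [CLS, 4 register tokens, patch tokens].
# The classification head must slice past both the CLS token and the registers.
num_register_tokens = 4
num_patches = 256

tokens = (["CLS"]
          + [f"REG{i}" for i in range(num_register_tokens)]
          + [f"P{i}" for i in range(num_patches)])

cls_token = tokens[0]
patch_tokens = tokens[1 + num_register_tokens:]  # skip CLS *and* registers

assert len(tokens) == 261          # matches the sequence length above
assert len(patch_tokens) == 256    # previously 260: registers were wrongly kept
```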
## Who can review?
@NielsRogge @qubvel
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37836/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37835 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37835/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37835/comments | https://api.github.com/repos/huggingface/transformers/issues/37835/events | https://github.com/huggingface/transformers/issues/37835 | 3,025,348,585 | I_kwDOCUB6oc60Uyfp | 37,835 | Add HindiCausalLM: A specialized Hindi language model (~102M parameters) | {
"login": "NandhaKishorM",
"id": 48623612,
"node_id": "MDQ6VXNlcjQ4NjIzNjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/48623612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NandhaKishorM",
"html_url": "https://github.com/NandhaKishorM",
"followers_url": "https://api.github.com/users/NandhaKishorM/followers",
"following_url": "https://api.github.com/users/NandhaKishorM/following{/other_user}",
"gists_url": "https://api.github.com/users/NandhaKishorM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NandhaKishorM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NandhaKishorM/subscriptions",
"organizations_url": "https://api.github.com/users/NandhaKishorM/orgs",
"repos_url": "https://api.github.com/users/NandhaKishorM/repos",
"events_url": "https://api.github.com/users/NandhaKishorM/events{/privacy}",
"received_events_url": "https://api.github.com/users/NandhaKishorM/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-04-28T15:30:32 | 2025-04-28T16:25:53 | null | NONE | null | null | null | null | ### Model description
HindiCausalLM is a causal language model specifically designed for Hindi text generation. The model was developed by ConvaiInnovations, and the original implementation is available on the [Hugging Face Hub](https://huggingface.co/convaiinnovations/hindi-causal-lm).
HindiCausalLM addresses the need for specialized language models for the Hindi language, which is one of the most widely spoken languages in the world with over 600 million speakers. The model provides improved performance on Hindi text generation tasks compared to multilingual models of similar size.
## Model Architecture and Technical Specifications
HindiCausalLM is based on a decoder-only transformer architecture with the following specifications:
- **Base Architecture**: Transformer decoder-only, similar to LLaMA but with architecture modifications
- **Size**: 12 layers, 768 hidden dimensions, ~102M parameters
- **Attention Mechanism**: Implements Grouped Query Attention (GQA) with 16 attention heads and 4 key-value heads for efficient inference
- **Feed-Forward Network**: Uses intermediate size of 3072 with SiLU activation functions
- **Normalization**: Layer normalization applied before attention and feed-forward blocks (pre-normalization)
- **Vocabulary Size**: 16,000 tokens using a SentencePiece tokenizer optimized for Hindi text
- **Context Window**: Supports sequences up to 512 tokens in length
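To illustrate the GQA configuration above (a hedged sketch: this is the standard head-grouping index arithmetic, not code taken from the model itself):

```python
# 16 query heads share 4 key-value heads: each KV head serves a group of 4 queries.
num_attention_heads = 16
num_key_value_heads = 4
group_size = num_attention_heads // num_key_value_heads  # 4 query heads per KV head

# Which KV head each query head attends with:
kv_head_for_query = [q // group_size for q in range(num_attention_heads)]
print(kv_head_for_query)  # [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
```

Storing only 4 KV heads instead of 16 shrinks the KV cache by 4x at inference time, which matters for a small model intended to run on modest hardware.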
## Usage
You can use this model with the following code after cloning the repository from the [Hugging Face Hub](https://huggingface.co/convaiinnovations/hindi-causal-lm):
```python
import torch
from hindi_embeddings import SentencePieceTokenizerWrapper
from convaicausallm_model import ConvaiCausalLM, ConvaiCausalLMConfig
from safetensors.torch import load_file
import os
class HindiLLMGenerator:
    def __init__(self, model_path, device=None):
        # Set device
        if device is None:
            self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        else:
            self.device = torch.device(device)
        print(f"Using device: {self.device}")

        # Load tokenizer
        tokenizer_path = os.path.join(model_path, "tokenizer.model")
        self.tokenizer = SentencePieceTokenizerWrapper(tokenizer_path)

        # Load model config
        config_path = os.path.join(model_path, "config.json")
        import json
        with open(config_path, 'r') as f:
            config_dict = json.load(f)
        self.config = ConvaiCausalLMConfig(**config_dict)

        # Load model - try safetensors first, fall back to PyTorch bin if needed
        safetensors_path = os.path.join(model_path, "model.safetensors")
        pytorch_path = os.path.join(model_path, "pytorch_model.bin")
        self.model = ConvaiCausalLM(self.config)

        # Check which format is available and load accordingly
        if os.path.exists(safetensors_path):
            print("Loading model from SafeTensors")
            state_dict = load_file(safetensors_path, device="cpu")
            self.model.load_state_dict(state_dict)
        elif os.path.exists(pytorch_path):
            print("Loading model from PyTorch bin")
            self.model.load_state_dict(torch.load(pytorch_path, map_location="cpu"))

        # Move model to device and set to evaluation mode
        self.model.to(self.device)
        self.model.eval()

    def generate(self, prompt, max_length=100, temperature=0.8, top_k=50, top_p=0.9,
                 repetition_penalty=1.1, do_sample=True):
        # Tokenize the prompt
        input_ids = self.tokenizer.sp_model.EncodeAsIds(prompt)
        input_tensor = torch.tensor([input_ids], dtype=torch.long).to(self.device)

        # Start with the input tensor
        output_sequence = input_tensor.clone()

        # Generate tokens one by one
        for _ in range(max_length - len(input_ids)):
            with torch.no_grad():
                # Get the model's output for the current sequence
                outputs = self.model(output_sequence)
                next_token_logits = outputs[0, -1, :]

                # Apply temperature
                if temperature > 0:
                    next_token_logits = next_token_logits / temperature

                # Apply repetition penalty
                if repetition_penalty > 1.0:
                    for token_id in output_sequence[0].tolist():
                        next_token_logits[token_id] /= repetition_penalty

                # Filter with top-k sampling
                if top_k > 0:
                    top_k_values, top_k_indices = torch.topk(next_token_logits, top_k)
                    next_token_logits = torch.full_like(next_token_logits, float('-inf'))
                    next_token_logits.scatter_(0, top_k_indices, top_k_values)

                # Filter with top-p/nucleus sampling
                if top_p < 1.0 and do_sample:
                    sorted_logits, sorted_indices = torch.sort(next_token_logits, descending=True)
                    cumulative_probs = torch.cumsum(torch.softmax(sorted_logits, dim=-1), dim=-1)

                    # Remove tokens with cumulative probability above the threshold
                    sorted_indices_to_remove = cumulative_probs > top_p
                    # Shift the indices to the right to keep the first token above the threshold
                    sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
                    sorted_indices_to_remove[..., 0] = 0
                    indices_to_remove = sorted_indices[sorted_indices_to_remove]
                    next_token_logits[indices_to_remove] = float('-inf')

                # Sample or choose the next token
                if do_sample:
                    probs = torch.softmax(next_token_logits, dim=-1)
                    next_token = torch.multinomial(probs, num_samples=1)
                else:
                    next_token = torch.argmax(next_token_logits, dim=-1).unsqueeze(0)

                # Add the next token to the sequence
                output_sequence = torch.cat([output_sequence, next_token.unsqueeze(0)], dim=1)

                # Check if we've generated an end token
                if next_token.item() == self.tokenizer.eos_token_id:
                    break

        # Decode the generated sequence
        generated_ids = output_sequence[0].tolist()
        generated_text = self.tokenizer.sp_model.DecodeIds(generated_ids)
        return generated_text


# Example usage
if __name__ == "__main__":
    generator = HindiLLMGenerator("path/to/model")
    result = generator.generate("भारत एक विशाल देश है")
    print(result)
``` | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37835/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/37834 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37834/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37834/comments | https://api.github.com/repos/huggingface/transformers/issues/37834/events | https://github.com/huggingface/transformers/pull/37834 | 3,025,275,002 | PR_kwDOCUB6oc6UL33W | 37,834 | Update modular_qwen3_moe.py | {
"login": "tanuj-rai",
"id": 84439872,
"node_id": "MDQ6VXNlcjg0NDM5ODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/84439872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanuj-rai",
"html_url": "https://github.com/tanuj-rai",
"followers_url": "https://api.github.com/users/tanuj-rai/followers",
"following_url": "https://api.github.com/users/tanuj-rai/following{/other_user}",
"gists_url": "https://api.github.com/users/tanuj-rai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanuj-rai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanuj-rai/subscriptions",
"organizations_url": "https://api.github.com/users/tanuj-rai/orgs",
"repos_url": "https://api.github.com/users/tanuj-rai/repos",
"events_url": "https://api.github.com/users/tanuj-rai/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanuj-rai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T15:11:14 | 2025-04-30T14:48:07 | 2025-04-30T12:28:08 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37834",
"html_url": "https://github.com/huggingface/transformers/pull/37834",
"diff_url": "https://github.com/huggingface/transformers/pull/37834.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37834.patch",
"merged_at": null
} | # What does this PR do?
Fixes #37813.
This PR removes redundant initializations of `self.self_attn` and `self.mlp` in `Qwen3MoeDecoderLayer` in `modular_qwen3_moe.py`.
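As a toy illustration of the redundancy (hypothetical class names; not the actual transformers code), a subclass re-assigning an attribute that `super().__init__()` already set is a no-op that only obscures what actually differs:

```python
class DecoderLayer:
    def __init__(self, layer_idx):
        self.self_attn = ("attn", layer_idx)   # set once by the parent
        self.mlp = ("mlp", layer_idx)

class MoeDecoderLayer(DecoderLayer):
    def __init__(self, layer_idx):
        super().__init__(layer_idx)
        # Redundant re-assignments like the ones removed by this PR:
        # self.self_attn = ("attn", layer_idx)
        self.mlp = ("sparse_moe", layer_idx)   # only the MoE block actually differs

layer = MoeDecoderLayer(0)
assert layer.self_attn == ("attn", 0)      # inherited setup is sufficient
assert layer.mlp == ("sparse_moe", 0)      # the genuine override remains
```

In the modular-transformers workflow, keeping only the genuine overrides in the modular file also keeps the auto-generated modeling file minimal.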
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@SunMarc
<!--
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37834/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37833 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37833/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37833/comments | https://api.github.com/repos/huggingface/transformers/issues/37833/events | https://github.com/huggingface/transformers/pull/37833 | 3,025,230,327 | PR_kwDOCUB6oc6ULt8k | 37,833 | Fix Dinov2 With Registers patch tokens in Image Classification | {
"login": "yaswanth19",
"id": 82788246,
"node_id": "MDQ6VXNlcjgyNzg4MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82788246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaswanth19",
"html_url": "https://github.com/yaswanth19",
"followers_url": "https://api.github.com/users/yaswanth19/followers",
"following_url": "https://api.github.com/users/yaswanth19/following{/other_user}",
"gists_url": "https://api.github.com/users/yaswanth19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaswanth19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaswanth19/subscriptions",
"organizations_url": "https://api.github.com/users/yaswanth19/orgs",
"repos_url": "https://api.github.com/users/yaswanth19/repos",
"events_url": "https://api.github.com/users/yaswanth19/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaswanth19/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T14:58:37 | 2025-04-30T07:08:00 | 2025-04-30T01:43:29 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37833",
"html_url": "https://github.com/huggingface/transformers/pull/37833",
"diff_url": "https://github.com/huggingface/transformers/pull/37833.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37833.patch",
"merged_at": null
} | # What does this PR do?
Fixes #37817
Thanks @psandovalsegura for the fix 🤗
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "yaswanth19",
"id": 82788246,
"node_id": "MDQ6VXNlcjgyNzg4MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/82788246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaswanth19",
"html_url": "https://github.com/yaswanth19",
"followers_url": "https://api.github.com/users/yaswanth19/followers",
"following_url": "https://api.github.com/users/yaswanth19/following{/other_user}",
"gists_url": "https://api.github.com/users/yaswanth19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaswanth19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaswanth19/subscriptions",
"organizations_url": "https://api.github.com/users/yaswanth19/orgs",
"repos_url": "https://api.github.com/users/yaswanth19/repos",
"events_url": "https://api.github.com/users/yaswanth19/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaswanth19/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37833/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37832 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37832/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37832/comments | https://api.github.com/repos/huggingface/transformers/issues/37832/events | https://github.com/huggingface/transformers/pull/37832 | 3,025,176,745 | PR_kwDOCUB6oc6ULiRm | 37,832 | Add dia model | {
"login": "buttercrab",
"id": 34997549,
"node_id": "MDQ6VXNlcjM0OTk3NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/34997549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buttercrab",
"html_url": "https://github.com/buttercrab",
"followers_url": "https://api.github.com/users/buttercrab/followers",
"following_url": "https://api.github.com/users/buttercrab/following{/other_user}",
"gists_url": "https://api.github.com/users/buttercrab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buttercrab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buttercrab/subscriptions",
"organizations_url": "https://api.github.com/users/buttercrab/orgs",
"repos_url": "https://api.github.com/users/buttercrab/repos",
"events_url": "https://api.github.com/users/buttercrab/events{/privacy}",
"received_events_url": "https://api.github.com/users/buttercrab/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T14:42:07 | 2025-05-27T14:39:51 | 2025-05-27T14:39:50 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37832",
"html_url": "https://github.com/huggingface/transformers/pull/37832",
"diff_url": "https://github.com/huggingface/transformers/pull/37832.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37832.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds the [Dia](https://github.com/nari-labs/dia) model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "buttercrab",
"id": 34997549,
"node_id": "MDQ6VXNlcjM0OTk3NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/34997549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/buttercrab",
"html_url": "https://github.com/buttercrab",
"followers_url": "https://api.github.com/users/buttercrab/followers",
"following_url": "https://api.github.com/users/buttercrab/following{/other_user}",
"gists_url": "https://api.github.com/users/buttercrab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/buttercrab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/buttercrab/subscriptions",
"organizations_url": "https://api.github.com/users/buttercrab/orgs",
"repos_url": "https://api.github.com/users/buttercrab/repos",
"events_url": "https://api.github.com/users/buttercrab/events{/privacy}",
"received_events_url": "https://api.github.com/users/buttercrab/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37832/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37831 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37831/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37831/comments | https://api.github.com/repos/huggingface/transformers/issues/37831/events | https://github.com/huggingface/transformers/pull/37831 | 3,025,059,773 | PR_kwDOCUB6oc6ULI5S | 37,831 | remove duplicate self_attn setup for qwen3 moe | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T14:05:30 | 2025-06-02T09:27:12 | 2025-06-02T09:27:12 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37831",
"html_url": "https://github.com/huggingface/transformers/pull/37831",
"diff_url": "https://github.com/huggingface/transformers/pull/37831.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37831.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37831/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37830 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37830/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37830/comments | https://api.github.com/repos/huggingface/transformers/issues/37830/events | https://github.com/huggingface/transformers/pull/37830 | 3,024,855,488 | PR_kwDOCUB6oc6UKdTK | 37,830 | Fixed a bug calculating cross entropy loss in `JetMoeForCausalLM` | {
"login": "Phoenix-Shen",
"id": 56379418,
"node_id": "MDQ6VXNlcjU2Mzc5NDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/56379418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Phoenix-Shen",
"html_url": "https://github.com/Phoenix-Shen",
"followers_url": "https://api.github.com/users/Phoenix-Shen/followers",
"following_url": "https://api.github.com/users/Phoenix-Shen/following{/other_user}",
"gists_url": "https://api.github.com/users/Phoenix-Shen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Phoenix-Shen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Phoenix-Shen/subscriptions",
"organizations_url": "https://api.github.com/users/Phoenix-Shen/orgs",
"repos_url": "https://api.github.com/users/Phoenix-Shen/repos",
"events_url": "https://api.github.com/users/Phoenix-Shen/events{/privacy}",
"received_events_url": "https://api.github.com/users/Phoenix-Shen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T12:58:28 | 2025-07-16T09:22:01 | 2025-07-16T09:22:01 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37830",
"html_url": "https://github.com/huggingface/transformers/pull/37830",
"diff_url": "https://github.com/huggingface/transformers/pull/37830.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37830.patch",
"merged_at": "2025-07-16T09:22:00"
} | In the original code, we shift the logits and pass `shift_logits` into the `self.loss_function`, but in `self.loss_function`, the `shift_logits` will be shifted again, so we are actually doing **next next token prediction**, which is incorrect. I have removed the logits shifting before calling `self.loss_function`.
} | In the original code, we shift the logits and labels and pass `shift_logits`/`shift_labels` into `self.loss_function`, but inside `self.loss_function` the labels are shifted again, so the model is actually trained on **next-next-token prediction**, which is incorrect. This PR removes the extra shifting before the call to `self.loss_function`.
# What does this PR do?
Fixed incorrect cross entropy calculation in `JetMoeForCausalLM`.
The original code snippet of the model's forward function is:
```python
if labels is not None:
logits = logits.float()
    # NOTICE: The logits and labels are already shifted once here
shift_logits = logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
shift_logits = shift_logits.view(-1, self.config.vocab_size)
shift_labels = shift_labels.view(-1)
shift_labels = shift_labels.to(shift_logits.device)
loss = self.loss_function(
shift_logits,
shift_labels,
vocab_size=self.config.vocab_size,
**kwargs,
)
```
but in `self.loss_function`, typically `ForCausalLMLoss`:
```python
def ForCausalLMLoss(
logits,
labels,
vocab_size: int,
num_items_in_batch: Optional[int] = None,
ignore_index: int = -100,
shift_labels: Optional[torch.Tensor] = None,
**kwargs,
) -> torch.Tensor:
logits = logits.float()
    # NOTICE: In the call above, `shift_labels` is None (the already-shifted
    # labels were passed via the plain `labels` argument, not via `shift_labels`),
    # so the labels get shifted a second time here!
if shift_labels is None:
labels = nn.functional.pad(labels, (0, 1), value=ignore_index)
shift_labels = labels[..., 1:].contiguous()
logits = logits.view(-1, vocab_size)
shift_labels = shift_labels.view(-1)
shift_labels = shift_labels.to(logits.device)
loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
return loss
```
So we are doing **next-next-token prediction**, which is incorrect. Removing the first label-shifting operation fixes this bug, which is what this PR does. With the modified code, I get reasonable results on SFT tasks.
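To make the misalignment concrete, here is a small self-contained sketch (plain Python, using position indices as stand-in token ids; not the library's actual code) showing that shifting in the caller and then again inside the loss function pairs the logits at position t with the label at position t+2:

```python
# Toy demonstration of the double-shift bug: position indices stand in for
# logits/labels so the final pairing is easy to read.
def pair_after_double_shift(logits_positions, label_positions):
    # Caller-side shift (the buggy code in the model's forward):
    shift_logits = logits_positions[:-1]   # drop last position
    shift_labels = label_positions[1:]     # drop first position
    # Loss-function-side shift (the loss pads with ignore_index and shifts
    # again, because it received the shifted labels via `labels`):
    padded = shift_labels + [-100]         # pad with ignore_index
    shifted_again = padded[1:]
    return list(zip(shift_logits, shifted_again))

positions = [0, 1, 2, 3]
pairs = pair_after_double_shift(positions, positions)
# Position 0's logits end up paired with label 2: next-NEXT-token prediction.
print(pairs)  # [(0, 2), (1, 3), (2, -100)]
```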
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37830/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37829 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37829/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37829/comments | https://api.github.com/repos/huggingface/transformers/issues/37829/events | https://github.com/huggingface/transformers/pull/37829 | 3,024,559,501 | PR_kwDOCUB6oc6UJcxV | 37,829 | [modular] Fix the prefix-based renaming if the old and new model share a common name suffix | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T10:58:46 | 2025-04-30T10:19:46 | 2025-04-29T08:43:23 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37829",
"html_url": "https://github.com/huggingface/transformers/pull/37829",
"diff_url": "https://github.com/huggingface/transformers/pull/37829.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37829.patch",
"merged_at": "2025-04-29T08:43:23"
} | # What does this PR do?
As per the title. See https://github.com/huggingface/transformers/pull/36895#issuecomment-2815598690 for details. It mostly affects the `detr` family of models, as they share a common suffix in their base names.
cc @qubvel for viz!
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37829/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37829/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37828 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37828/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37828/comments | https://api.github.com/repos/huggingface/transformers/issues/37828/events | https://github.com/huggingface/transformers/pull/37828 | 3,024,359,522 | PR_kwDOCUB6oc6UIwpa | 37,828 | Enhance documentation to explain chat-based few-shot prompting | {
"login": "MostHumble",
"id": 56939432,
"node_id": "MDQ6VXNlcjU2OTM5NDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/56939432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MostHumble",
"html_url": "https://github.com/MostHumble",
"followers_url": "https://api.github.com/users/MostHumble/followers",
"following_url": "https://api.github.com/users/MostHumble/following{/other_user}",
"gists_url": "https://api.github.com/users/MostHumble/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MostHumble/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MostHumble/subscriptions",
"organizations_url": "https://api.github.com/users/MostHumble/orgs",
"repos_url": "https://api.github.com/users/MostHumble/repos",
"events_url": "https://api.github.com/users/MostHumble/events{/privacy}",
"received_events_url": "https://api.github.com/users/MostHumble/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T09:46:01 | 2025-04-30T18:00:10 | 2025-04-30T18:00:10 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37828",
"html_url": "https://github.com/huggingface/transformers/pull/37828",
"diff_url": "https://github.com/huggingface/transformers/pull/37828.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37828.patch",
"merged_at": "2025-04-30T18:00:10"
} | Updates the documentation on few-shot prompting to illustrate how to structure examples using the chat-based format for instruction-tuned models.
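As an illustration of the structure this documentation change describes (the example texts below are hypothetical, not taken from the docs), each few-shot example becomes a completed user/assistant exchange and the real query is the final user turn:

```python
# Sketch of chat-based few-shot prompting: each shot is a completed
# user/assistant pair; the actual query is appended as the final user turn.
few_shot_examples = [
    ("Translate to French: cat", "chat"),
    ("Translate to French: dog", "chien"),
]
query = "Translate to French: bird"

messages = []
for user_text, assistant_text in few_shot_examples:
    messages.append({"role": "user", "content": user_text})
    messages.append({"role": "assistant", "content": assistant_text})
messages.append({"role": "user", "content": query})

# With an instruction-tuned model, this list would then be rendered via
# tokenizer.apply_chat_template(messages, add_generation_prompt=True).
print(len(messages))  # 5 turns: two completed shots plus the final query
```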
# What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37828/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37827 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37827/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37827/comments | https://api.github.com/repos/huggingface/transformers/issues/37827/events | https://github.com/huggingface/transformers/pull/37827 | 3,024,343,307 | PR_kwDOCUB6oc6UItIy | 37,827 | Add HindiCausalLM: A specialized Hindi language model with grouped query attention (~102M parameters) | {
"login": "NandhaKishorM",
"id": 48623612,
"node_id": "MDQ6VXNlcjQ4NjIzNjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/48623612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NandhaKishorM",
"html_url": "https://github.com/NandhaKishorM",
"followers_url": "https://api.github.com/users/NandhaKishorM/followers",
"following_url": "https://api.github.com/users/NandhaKishorM/following{/other_user}",
"gists_url": "https://api.github.com/users/NandhaKishorM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NandhaKishorM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NandhaKishorM/subscriptions",
"organizations_url": "https://api.github.com/users/NandhaKishorM/orgs",
"repos_url": "https://api.github.com/users/NandhaKishorM/repos",
"events_url": "https://api.github.com/users/NandhaKishorM/events{/privacy}",
"received_events_url": "https://api.github.com/users/NandhaKishorM/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T09:39:38 | 2025-04-28T17:37:46 | 2025-04-28T17:37:46 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37827",
"html_url": "https://github.com/huggingface/transformers/pull/37827",
"diff_url": "https://github.com/huggingface/transformers/pull/37827.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37827.patch",
"merged_at": null
} | # What does this PR do?
This PR adds complete support for the HindiCausalLM model, a causal language model specifically designed for Hindi text generation. The model was developed by ConvaiInnovations and the original implementation is available on the [Hugging Face Hub](https://huggingface.co/convaiinnovations/hindi-causal-lm).
HindiCausalLM addresses the need for specialized language models for the Hindi language, which is one of the most widely spoken languages in the world with over 600 million speakers. The model provides improved performance on Hindi text generation tasks compared to multilingual models of similar size.
## Model Architecture and Technical Specifications
HindiCausalLM is based on a decoder-only transformer architecture with the following specifications:
- **Base Architecture**: Decoder-only transformer, similar to LLaMA but with architectural modifications
- **Size**: 12 layers, 768 hidden dimensions, ~102M parameters
- **Attention Mechanism**: Implements Grouped Query Attention (GQA) with 16 attention heads and 4 key-value heads for efficient inference
- **Feed-Forward Network**: Uses intermediate size of 3072 with SiLU activation functions
- **Normalization**: Layer normalization applied before attention and feed-forward blocks (pre-normalization)
- **Vocabulary Size**: 16,000 tokens using a SentencePiece tokenizer optimized for Hindi text
- **Context Window**: Supports sequences up to 512 tokens in length
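The grouped-query layout described above can be sketched at the shape level as follows. This is an illustrative numpy sketch (not the model's actual implementation), assuming the 16 query heads are split evenly over the 4 key-value heads:

```python
import numpy as np

# Shape-level sketch of Grouped Query Attention (GQA): 16 query heads share
# 4 key-value heads, so each KV head serves a group of 4 query heads.
batch, seq, num_heads, num_kv_heads, head_dim = 2, 8, 16, 4, 48  # 16 * 48 = 768

q = np.zeros((batch, num_heads, seq, head_dim))
k = np.zeros((batch, num_kv_heads, seq, head_dim))

# Expand the KV heads along the head axis to match the query heads
# (the numpy equivalent of torch's repeat_interleave in GQA implementations).
group_size = num_heads // num_kv_heads  # 4 query heads per KV head
k_expanded = np.repeat(k, group_size, axis=1)

assert k_expanded.shape == q.shape  # (2, 16, 8, 48): attention can proceed
```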
## Implementation Details
This PR includes:
1. **Model Implementation**:
- PyTorch implementation with three main model classes:
- `HindiCausalLMModel`: Base model outputting hidden states
- `HindiCausalLMForCausalLM`: Model with a language modeling head for text generation
- `HindiCausalLMForSequenceClassification`: Model with a classification head for sequence classification tasks
- TensorFlow implementation with equivalent classes for cross-framework compatibility
- Full support for features like model parallelism, gradient checkpointing, and mixed precision
2. **Tokenizer Implementation**:
- `HindiCausalLMTokenizer`: SentencePiece-based slow tokenizer
- `HindiCausalLMTokenizerFast`: Fast tokenizer implementation using 🤗 Tokenizers
- Special token handling optimized for Hindi text
3. **Generation Support**:
- Custom generation configuration optimized for Hindi text generation
- Support for standard generation parameters (temperature, top-k, top-p, etc.)
- Efficient beam search and sampling implementations
4. **Conversion Utilities**:
- `convert_hindicausallm_original_pytorch_to_hf.py`: Script for converting original checkpoints to the Hugging Face format
- Comprehensive weight mapping to ensure compatibility
5. **Auto Class Integration**:
- Added to all auto classes: `AutoModel`, `AutoModelForCausalLM`, `AutoModelForSequenceClassification`, `AutoTokenizer`, etc.
- Properly registered with model types and mappings
## Files Added or Modified
New files:
```
src/transformers/models/hindicausallm/
├── __init__.py
├── configuration_hindicausallm.py
├── convert_hindicausallm_original_pytorch_to_hf.py
├── generation_config_hindicausallm.py
├── modeling_hindicausallm.py
├── modeling_tf_hindicausallm.py
├── README.md
├── tokenization_hindicausallm.py
└── tokenization_hindicausallm_fast.py
tests/models/hindicausallm/
├── __init__.py
├── test_modeling_hindicausallm.py
├── test_tokenization_hindicausallm.py
└── test_docstring_hindicausallm.py
docs/source/en/model_doc/hindicausallm.md
```
Modified files:
```
src/transformers/models/__init__.py
src/transformers/models/auto/configuration_auto.py
src/transformers/models/auto/modeling_auto.py
src/transformers/models/auto/tokenization_auto.py
src/transformers/models/auto/tokenization_auto_fast.py
docs/source/en/model_summary.md
```
## Testing
The implementation includes comprehensive tests:
1. **Model Tests**:
- Basic functionality tests for all model variants
- Forward pass testing with various input configurations
- Integration with generation pipelines
- Weight loading and saving tests
- Gradient flow verification
2. **Tokenizer Tests**:
- Tokenization and detokenization verification with Hindi text
- Special token handling tests
- Fast and slow tokenizer compatibility tests
- Serialization and deserialization tests
3. **Documentation Tests**:
- Docstring example verification
- Usage pattern validation
4. **Integration Tests**:
- Model loading from Hub
- Text generation verification with real Hindi text
- Model/tokenizer integration tests
All tests pass on both CPU and GPU, with and without mixed precision.
## Documentation
This PR includes comprehensive documentation:
1. **Model Card**:
- Detailed model card in `docs/source/en/model_doc/hindicausallm.md`
- Architecture description
- Usage examples
- Performance characteristics
2. **API Documentation**:
- Well-documented API for all classes
- Parameter descriptions and type annotations
- Example usage in docstrings
3. **README**:
- Internal README.md with implementation details
- Links to original model and paper
## Usage Examples
### Basic Usage for Text Generation
```python
from transformers import HindiCausalLMForCausalLM, HindiCausalLMTokenizer
# Load model and tokenizer
model = HindiCausalLMForCausalLM.from_pretrained("convaiinnovations/hindi-causal-lm")
tokenizer = HindiCausalLMTokenizer.from_pretrained("convaiinnovations/hindi-causal-lm")
# Generate text
input_text = "भारत एक विशाल देश है" # "India is a vast country"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
# Generate
outputs = model.generate(
input_ids,
max_length=50,
num_return_sequences=1,
temperature=0.7,
top_p=0.9,
do_sample=True
)
# Decode the generated text
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
### Classification Example
```python
from transformers import HindiCausalLMForSequenceClassification, HindiCausalLMTokenizer
import torch
# Load model and tokenizer
model = HindiCausalLMForSequenceClassification.from_pretrained(
"convaiinnovations/hindi-causal-lm", num_labels=3
)
tokenizer = HindiCausalLMTokenizer.from_pretrained("convaiinnovations/hindi-causal-lm")
# Example texts
texts = ["यह एक सकारात्मक समीक्षा है", "यह एक नकारात्मक समीक्षा है", "यह एक तटस्थ समीक्षा है"]
inputs = tokenizer(texts, padding=True, return_tensors="pt")
# Forward pass
with torch.no_grad():
outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=1)
print(f"Predictions: {predictions}")
```
### TensorFlow Example
```python
from transformers import TFHindiCausalLMForCausalLM, HindiCausalLMTokenizer
import tensorflow as tf
# Load model and tokenizer
model = TFHindiCausalLMForCausalLM.from_pretrained("convaiinnovations/hindi-causal-lm")
tokenizer = HindiCausalLMTokenizer.from_pretrained("convaiinnovations/hindi-causal-lm")
# Process input
input_text = "हिंदी भाषा बहुत समृद्ध है" # "Hindi language is very rich"
inputs = tokenizer(input_text, return_tensors="tf")
# Generate
generated_ids = model.generate(inputs.input_ids, max_length=30)
generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(generated_text)
```
## Model Performance
The HindiCausalLM model demonstrates strong performance on Hindi text generation tasks. Based on evaluations, it shows significant improvements over generic multilingual models of similar size on Hindi-specific tasks:
- **Text Generation**: Produces fluent and coherent Hindi text
- **Hindi Understanding**: Demonstrates good understanding of Hindi grammar and syntax
- **Context Handling**: Maintains context effectively through longer generated sequences
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? This model is publicly available on the Hugging Face Hub and meets the criteria for inclusion in the Transformers library.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? Yes, comprehensive tests have been added for model and tokenizer functionality.
## Who can review?
@ArthurZucker - For text model implementation review
@Rocketknight1 - For TensorFlow implementation review | {
"login": "NandhaKishorM",
"id": 48623612,
"node_id": "MDQ6VXNlcjQ4NjIzNjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/48623612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NandhaKishorM",
"html_url": "https://github.com/NandhaKishorM",
"followers_url": "https://api.github.com/users/NandhaKishorM/followers",
"following_url": "https://api.github.com/users/NandhaKishorM/following{/other_user}",
"gists_url": "https://api.github.com/users/NandhaKishorM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NandhaKishorM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NandhaKishorM/subscriptions",
"organizations_url": "https://api.github.com/users/NandhaKishorM/orgs",
"repos_url": "https://api.github.com/users/NandhaKishorM/repos",
"events_url": "https://api.github.com/users/NandhaKishorM/events{/privacy}",
"received_events_url": "https://api.github.com/users/NandhaKishorM/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37827/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37826 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37826/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37826/comments | https://api.github.com/repos/huggingface/transformers/issues/37826/events | https://github.com/huggingface/transformers/pull/37826 | 3,024,337,265 | PR_kwDOCUB6oc6UIr1F | 37,826 | Update modeling_qwen3_moe.py | {
"login": "tanuj-rai",
"id": 84439872,
"node_id": "MDQ6VXNlcjg0NDM5ODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/84439872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanuj-rai",
"html_url": "https://github.com/tanuj-rai",
"followers_url": "https://api.github.com/users/tanuj-rai/followers",
"following_url": "https://api.github.com/users/tanuj-rai/following{/other_user}",
"gists_url": "https://api.github.com/users/tanuj-rai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanuj-rai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanuj-rai/subscriptions",
"organizations_url": "https://api.github.com/users/tanuj-rai/orgs",
"repos_url": "https://api.github.com/users/tanuj-rai/repos",
"events_url": "https://api.github.com/users/tanuj-rai/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanuj-rai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T09:37:09 | 2025-04-29T14:56:14 | 2025-04-29T14:56:14 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37826",
"html_url": "https://github.com/huggingface/transformers/pull/37826",
"diff_url": "https://github.com/huggingface/transformers/pull/37826.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37826.patch",
"merged_at": null
} | # What does this PR do?
Fixes #37813.
This PR removes redundant initializations of `self.self_attn` and `self.mlp` in `Qwen3MoeDecoderLayer`.
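A minimal, hypothetical sketch of the kind of duplication being removed (this is illustrative only, not the actual `Qwen3MoeDecoderLayer` code): re-assigning a submodule inside `__init__` constructs it twice and silently discards the first instance, wasting work at model-build time.

```python
# Hypothetical sketch of redundant submodule initialization -- not the
# actual Qwen3MoeDecoderLayer implementation.
construction_counts = {"attn": 0, "mlp": 0}

class Attention:
    def __init__(self):
        construction_counts["attn"] += 1

class MLP:
    def __init__(self):
        construction_counts["mlp"] += 1

class DecoderLayerWithRedundantInit:
    def __init__(self):
        self.self_attn = Attention()  # first construction
        self.mlp = MLP()
        # Redundant re-initialization: builds a second object and
        # throws the first away.
        self.self_attn = Attention()
        self.mlp = MLP()

class DecoderLayerFixed:
    def __init__(self):
        self.self_attn = Attention()  # constructed exactly once
        self.mlp = MLP()

DecoderLayerWithRedundantInit()
redundant = dict(construction_counts)

construction_counts["attn"] = construction_counts["mlp"] = 0
DecoderLayerFixed()
fixed = dict(construction_counts)

print(redundant, fixed)  # {'attn': 2, 'mlp': 2} {'attn': 1, 'mlp': 1}
```

With the redundant lines removed, each submodule is constructed exactly once.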
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1 @ArthurZucker
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "tanuj-rai",
"id": 84439872,
"node_id": "MDQ6VXNlcjg0NDM5ODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/84439872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanuj-rai",
"html_url": "https://github.com/tanuj-rai",
"followers_url": "https://api.github.com/users/tanuj-rai/followers",
"following_url": "https://api.github.com/users/tanuj-rai/following{/other_user}",
"gists_url": "https://api.github.com/users/tanuj-rai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanuj-rai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanuj-rai/subscriptions",
"organizations_url": "https://api.github.com/users/tanuj-rai/orgs",
"repos_url": "https://api.github.com/users/tanuj-rai/repos",
"events_url": "https://api.github.com/users/tanuj-rai/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanuj-rai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37826/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37825 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37825/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37825/comments | https://api.github.com/repos/huggingface/transformers/issues/37825/events | https://github.com/huggingface/transformers/pull/37825 | 3,024,285,073 | PR_kwDOCUB6oc6UIgVZ | 37,825 | Fix check of unnecessary packages (issue #37626) | {
"login": "HichTala",
"id": 98521878,
"node_id": "U_kgDOBd9TFg",
"avatar_url": "https://avatars.githubusercontent.com/u/98521878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HichTala",
"html_url": "https://github.com/HichTala",
"followers_url": "https://api.github.com/users/HichTala/followers",
"following_url": "https://api.github.com/users/HichTala/following{/other_user}",
"gists_url": "https://api.github.com/users/HichTala/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HichTala/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HichTala/subscriptions",
"organizations_url": "https://api.github.com/users/HichTala/orgs",
"repos_url": "https://api.github.com/users/HichTala/repos",
"events_url": "https://api.github.com/users/HichTala/events{/privacy}",
"received_events_url": "https://api.github.com/users/HichTala/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-28T09:19:16 | 2025-04-29T13:44:43 | 2025-04-29T13:21:06 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/37825",
"html_url": "https://github.com/huggingface/transformers/pull/37825",
"diff_url": "https://github.com/huggingface/transformers/pull/37825.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/37825.patch",
"merged_at": "2025-04-29T13:21:05"
} | # What does this PR do?
This pull request updates the `get_imports` function in `src/transformers/dynamic_module_utils.py` to improve how conditional imports are handled. The most important change introduces a check for functions in `transformers.utils.import_utils` to ensure imports in specific conditional blocks are ignored.
### Enhancements to conditional import handling:
* [`src/transformers/dynamic_module_utils.py`](diffhunk://#diff-955cd5ce55aed4805a0875320c92ae20f0e80d86f010bb30e0ef267376c75d1cR154-R168): Added a reference to `transformers.utils` and updated the logic in the `recursive_look_for_imports` function to ignore imports in conditional blocks that use functions from `transformers.utils.import_utils`, in addition to the existing check for `is_flash_attn` functions.
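A simplified, stdlib-only sketch of the behavior this PR describes (the real `get_imports` in `dynamic_module_utils.py` is more involved; this illustration and its helper name are hypothetical): walk the module's AST and skip any import that sits inside a conditional guarded by an `is_*_available()` check.

```python
import ast

def get_top_level_imports(source: str):
    """Collect imported top-level module names from `source`, skipping any
    import that sits inside a block guarded by an availability check such
    as `if is_torch_available():`.

    Simplified sketch of the conditional-import handling described above,
    not the actual `transformers.dynamic_module_utils.get_imports`.
    """
    imports = []

    def visit(node, guarded=False):
        for child in ast.iter_child_nodes(node):
            if isinstance(child, ast.If):
                test = child.test
                # Treat `if is_<backend>_available():` as a guard whose
                # imports should be ignored.
                is_guard = (
                    isinstance(test, ast.Call)
                    and isinstance(test.func, ast.Name)
                    and test.func.id.startswith("is_")
                    and test.func.id.endswith("_available")
                )
                visit(child, guarded or is_guard)
            elif isinstance(child, ast.Import):
                if not guarded:
                    imports.extend(alias.name.split(".")[0] for alias in child.names)
            elif isinstance(child, ast.ImportFrom):
                if not guarded and child.module:
                    imports.append(child.module.split(".")[0])
            else:
                visit(child, guarded)

    visit(ast.parse(source))
    return imports

sample = (
    "import os\n"
    "from transformers.utils import is_torch_available\n"
    "if is_torch_available():\n"
    "    import torch\n"
)
print(get_top_level_imports(sample))  # ['os', 'transformers'] -- torch is ignored
```

The guarded `import torch` is not reported as a hard requirement, which is the point of the fix: optional backends behind availability checks should not be treated as mandatory dependencies of a dynamic module.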
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
- #37626
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37825/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/37824 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37824/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37824/comments | https://api.github.com/repos/huggingface/transformers/issues/37824/events | https://github.com/huggingface/transformers/issues/37824 | 3,024,265,982 | I_kwDOCUB6oc60QqL- | 37,824 | Support for B200 (`sm_100` with `pytorch>=2.7.0`) | {
"login": "dominiquegarmier",
"id": 42445422,
"node_id": "MDQ6VXNlcjQyNDQ1NDIy",
"avatar_url": "https://avatars.githubusercontent.com/u/42445422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dominiquegarmier",
"html_url": "https://github.com/dominiquegarmier",
"followers_url": "https://api.github.com/users/dominiquegarmier/followers",
"following_url": "https://api.github.com/users/dominiquegarmier/following{/other_user}",
"gists_url": "https://api.github.com/users/dominiquegarmier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dominiquegarmier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dominiquegarmier/subscriptions",
"organizations_url": "https://api.github.com/users/dominiquegarmier/orgs",
"repos_url": "https://api.github.com/users/dominiquegarmier/repos",
"events_url": "https://api.github.com/users/dominiquegarmier/events{/privacy}",
"received_events_url": "https://api.github.com/users/dominiquegarmier/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | null | [] | 2025-04-28T09:11:40 | 2025-06-12T14:34:15 | 2025-06-12T14:34:15 | NONE | null | null | null | null | Is there already an open issue addressing/discussing https://github.com/huggingface/transformers/pull/37760 (I could not find any)? Looks like there is an issue open on the pytorch repo https://github.com/pytorch/pytorch/issues/152275
Feel free to close if this is a duplicate.
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37824/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/37823 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/37823/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/37823/comments | https://api.github.com/repos/huggingface/transformers/issues/37823/events | https://github.com/huggingface/transformers/issues/37823 | 3,024,134,194 | I_kwDOCUB6oc60QKAy | 37,823 | Decoder Attention Mask is not passed to the VisionEncoderDecoderModel during training!! | {
"login": "AhmadM-DL",
"id": 52525688,
"node_id": "MDQ6VXNlcjUyNTI1Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/52525688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AhmadM-DL",
"html_url": "https://github.com/AhmadM-DL",
"followers_url": "https://api.github.com/users/AhmadM-DL/followers",
"following_url": "https://api.github.com/users/AhmadM-DL/following{/other_user}",
"gists_url": "https://api.github.com/users/AhmadM-DL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AhmadM-DL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AhmadM-DL/subscriptions",
"organizations_url": "https://api.github.com/users/AhmadM-DL/orgs",
"repos_url": "https://api.github.com/users/AhmadM-DL/repos",
"events_url": "https://api.github.com/users/AhmadM-DL/events{/privacy}",
"received_events_url": "https://api.github.com/users/AhmadM-DL/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-04-28T08:25:54 | 2025-06-06T08:03:05 | 2025-06-06T08:03:05 | NONE | null | null | null | null | ### System Info
latest transformers
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am training a VisionEncoderDecoderModel.
I noticed that while the modeling code shifts the `labels` to the right to generate `decoder_input_ids`, it doesn't generate the `decoder_attention_mask` by default.
Here is a snippet from the `vision_encoder_decoder_modeling` file:
https://github.com/huggingface/transformers/blob/816b37010cb6fd963433c6c5681b18be6475592e/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L612
### Expected behavior
The default behavior should be to generate the mask on the go.
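For illustration, generating the mask on the fly could be as simple as marking pad positions in the shifted ids. This is a hedged sketch using plain Python lists (the real model operates on tensors, and this helper name is hypothetical, not part of the library):

```python
def build_decoder_attention_mask(decoder_input_ids, pad_token_id):
    """Hypothetical helper: 1 for real tokens, 0 for padding positions."""
    return [
        [0 if token == pad_token_id else 1 for token in sequence]
        for sequence in decoder_input_ids
    ]

batch = [
    [101, 2054, 2003, 102, 0, 0],      # padded sequence
    [101, 2129, 2024, 2017, 102, 0],
]
print(build_decoder_attention_mask(batch, pad_token_id=0))
# [[1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 0]]
```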
Aside from this, how can we pass the decoder attention mask?
I am using the Seq2Seq trainer. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/37823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/37823/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |