url string | repository_url string | labels_url string | comments_url string | events_url string | html_url string | id int64 | node_id string | number int64 | title string | user dict | labels list | state string | locked bool | assignee dict | assignees list | milestone null | comments list | created_at timestamp[ms] | updated_at timestamp[ms] | closed_at timestamp[ms] | author_association string | type dict | active_lock_reason null | draft bool | pull_request dict | body string | closed_by dict | reactions dict | timeline_url string | performed_via_github_app null | state_reason string | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool | is_closed bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/39837 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39837/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39837/comments | https://api.github.com/repos/huggingface/transformers/issues/39837/events | https://github.com/huggingface/transformers/pull/39837 | 3,282,425,868 | PR_kwDOCUB6oc6hpAVr | 39,837 | add step3v in VLMS | {
"login": "yhyang201",
"id": 47235274,
"node_id": "MDQ6VXNlcjQ3MjM1Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/47235274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yhyang201",
"html_url": "https://github.com/yhyang201",
"followers_url": "https://api.github.com/users/yhyang201/followers",
"following_url": "https://api.github.com/users/yhyang201/following{/other_user}",
"gists_url": "https://api.github.com/users/yhyang201/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yhyang201/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yhyang201/subscriptions",
"organizations_url": "https://api.github.com/users/yhyang201/orgs",
"repos_url": "https://api.github.com/users/yhyang201/repos",
"events_url": "https://api.github.com/users/yhyang201/events{/privacy}",
"received_events_url": "https://api.github.com/users/yhyang201/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-08-01T03:34:16 | 2025-08-01T08:14:08 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39837",
"html_url": "https://github.com/huggingface/transformers/pull/39837",
"diff_url": "https://github.com/huggingface/transformers/pull/39837.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39837.patch",
"merged_at": null
} | # What does this PR do?
I would like to kindly request the addition of a new VLM entry, `"step3v"`, at the following location:
[https://github.com/huggingface/transformers/blob/2c0af41ce5c448f872f3222a75f56030fb2e5a88/src/transformers/modeling\_utils.py#L233](https://github.com/huggingface/transformers/blob/2c0af41ce5c448f872f3222a75f56030fb2e5a88/src/transformers/modeling_utils.py#L233)
The reason for this request is that `step3v` requires the use of `_checkpoint_conversion_mapping` during model loading. However, as per the current implementation at:
[https://github.com/huggingface/transformers/blob/2c0af41ce5c448f872f3222a75f56030fb2e5a88/src/transformers/modeling\_utils.py#L4578](https://github.com/huggingface/transformers/blob/2c0af41ce5c448f872f3222a75f56030fb2e5a88/src/transformers/modeling_utils.py#L4578)
only models registered in `VLMS` are allowed to access `_checkpoint_conversion_mapping`.
By including `step3v` in the `VLMS` list, it will be possible to remove the current workaround in the [example code](https://huggingface.co/stepfun-ai/step3/blob/main/README.md?code=true#L344), where key mappings are manually passed in. This change would improve the readability and overall clarity of the example.
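For illustration, the requested change amounts to a one-line addition. The sketch below is hypothetical: the surrounding entries are placeholders, not the real contents of the `VLMS` list in `modeling_utils.py`, and `step3v` is the only name taken from this PR.

```python
# Hypothetical sketch of the proposed edit to src/transformers/modeling_utils.py.
# The other entries are placeholders, not the real list contents.
VLMS = [
    "llava",      # placeholder entry
    "qwen2_vl",   # placeholder entry
    "step3v",     # <-- the addition requested by this PR
]

def can_use_checkpoint_conversion_mapping(model_type: str) -> bool:
    # Mirrors the gating described above: only models registered in VLMS
    # are allowed to access `_checkpoint_conversion_mapping` while loading.
    return model_type in VLMS
```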
Thank you very much for your time and consideration!
@amyeroberts, @qubvel
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39837/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39836 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39836/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39836/comments | https://api.github.com/repos/huggingface/transformers/issues/39836/events | https://github.com/huggingface/transformers/pull/39836 | 3,282,226,393 | PR_kwDOCUB6oc6hoUup | 39,836 | Support input_embeds in torch exportable decoders | {
"login": "jackzhxng",
"id": 32371937,
"node_id": "MDQ6VXNlcjMyMzcxOTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/32371937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackzhxng",
"html_url": "https://github.com/jackzhxng",
"followers_url": "https://api.github.com/users/jackzhxng/followers",
"following_url": "https://api.github.com/users/jackzhxng/following{/other_user}",
"gists_url": "https://api.github.com/users/jackzhxng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackzhxng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackzhxng/subscriptions",
"organizations_url": "https://api.github.com/users/jackzhxng/orgs",
"repos_url": "https://api.github.com/users/jackzhxng/repos",
"events_url": "https://api.github.com/users/jackzhxng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackzhxng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-08-01T01:50:36 | 2025-08-18T22:36:32 | 2025-08-07T08:51:32 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39836",
"html_url": "https://github.com/huggingface/transformers/pull/39836",
"diff_url": "https://github.com/huggingface/transformers/pull/39836.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39836.patch",
"merged_at": "2025-08-07T08:51:32"
} | # What does this PR do?
Allows specifying `inputs_embeds` in the `TorchExportableModule` classes in order to support export of multimodal models' text decoders.
Adds `config` and `generation_config` to the constructors to support multimodal models, since `TorchExportableModule` wraps the nested text decoder model, which doesn't expose its config and generation config as attributes.
e.g. for exporting Voxtral's text decoder we need to:
```python
voxtral = AutoModel.from_pretrained( ... )
TorchExportableModuleForDecoderOnlyLM(
    model=voxtral.language_model,
    config=voxtral.config.text_config,
    generation_config=voxtral.generation_config,
)
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
@echarlaix @michaelbenayoun @zucchini-nlp | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39836/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39835 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39835/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39835/comments | https://api.github.com/repos/huggingface/transformers/issues/39835/events | https://github.com/huggingface/transformers/issues/39835 | 3,282,084,711 | I_kwDOCUB6oc7DoKNn | 39,835 | Crash when running Llama4 on transformers-4.54.1 | {
"login": "IKACE",
"id": 39850409,
"node_id": "MDQ6VXNlcjM5ODUwNDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/39850409?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IKACE",
"html_url": "https://github.com/IKACE",
"followers_url": "https://api.github.com/users/IKACE/followers",
"following_url": "https://api.github.com/users/IKACE/following{/other_user}",
"gists_url": "https://api.github.com/users/IKACE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IKACE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IKACE/subscriptions",
"organizations_url": "https://api.github.com/users/IKACE/orgs",
"repos_url": "https://api.github.com/users/IKACE/repos",
"events_url": "https://api.github.com/users/IKACE/events{/privacy}",
"received_events_url": "https://api.github.com/users/IKACE/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-08-01T00:32:36 | 2025-08-15T08:54:40 | 2025-08-15T08:54:40 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.54.1
- Platform: Linux-6.8.0-1032-aws-x86_64-with-glibc2.35
- Python version: 3.10.0
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: 1.8.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.6.0+cu124 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: distributed
- Using GPU in script?: yes
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I encountered a process crash when running Llama-4 with TP=8 on the latest `transformers` release, 4.54.1. It either raises a direct `SIGTERM` from any rank of the GPUs or gives the error `runtimeerror: aten.mul.tensor: got mixed torch.tensor and dtensor, need to convert all torch.tensor to dtensor before calling distributed operators!`. I have verified that falling back to `transformers` version 4.53.3 works with no errors.
Please see the attached script I am running. You can run with command `torchrun --standalone --nproc-per-node 8 run_llama4.py`.
```python
# run_llama4.py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# enable tensor parallelism
model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-4-Scout-17B-16E-Instruct",
torch_dtype=torch.bfloat16,
tp_plan="auto"
)
# prepare input tokens
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-4-Scout-17B-16E-Instruct")
prompt = "The University of Washington is"
inputs = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
max_new_tokens = 50
generated = inputs
past_key_values = None
with torch.no_grad():
for step in range(max_new_tokens):
if step > 0:
inputs = next_token
outputs = model(input_ids=inputs, past_key_values=past_key_values, use_cache=True)
past_key_values = outputs.past_key_values # cache past key values for next iteration
logits = outputs.logits # shape: [batch, seq_len, vocab]
next_token_logits = logits[:, -1, :] # only use last token's logits
next_token = torch.argmax(next_token_logits, dim=-1).unsqueeze(-1) # greedy decoding
generated = torch.cat([generated, next_token], dim=-1)
decoded = tokenizer.batch_decode(generated)
print("Generated text:", decoded[0])
```
### Expected behavior
No runtime error is encountered during execution. | {
"login": "3outeille",
"id": 47445085,
"node_id": "MDQ6VXNlcjQ3NDQ1MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/47445085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/3outeille",
"html_url": "https://github.com/3outeille",
"followers_url": "https://api.github.com/users/3outeille/followers",
"following_url": "https://api.github.com/users/3outeille/following{/other_user}",
"gists_url": "https://api.github.com/users/3outeille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/3outeille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/3outeille/subscriptions",
"organizations_url": "https://api.github.com/users/3outeille/orgs",
"repos_url": "https://api.github.com/users/3outeille/repos",
"events_url": "https://api.github.com/users/3outeille/events{/privacy}",
"received_events_url": "https://api.github.com/users/3outeille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39835/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39834 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39834/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39834/comments | https://api.github.com/repos/huggingface/transformers/issues/39834/events | https://github.com/huggingface/transformers/issues/39834 | 3,282,048,449 | I_kwDOCUB6oc7DoBXB | 39,834 | Allow extra outputs from `GenerationMixin.generate` | {
"login": "jood-canva",
"id": 206628664,
"node_id": "U_kgDODFDnOA",
"avatar_url": "https://avatars.githubusercontent.com/u/206628664?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jood-canva",
"html_url": "https://github.com/jood-canva",
"followers_url": "https://api.github.com/users/jood-canva/followers",
"following_url": "https://api.github.com/users/jood-canva/following{/other_user}",
"gists_url": "https://api.github.com/users/jood-canva/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jood-canva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jood-canva/subscriptions",
"organizations_url": "https://api.github.com/users/jood-canva/orgs",
"repos_url": "https://api.github.com/users/jood-canva/repos",
"events_url": "https://api.github.com/users/jood-canva/events{/privacy}",
"received_events_url": "https://api.github.com/users/jood-canva/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-08-01T00:10:46 | 2025-09-19T08:43:45 | null | NONE | null | null | null | null | ### Feature request
Hi all, first of all if this feature already exists I apologise!
With the rise of multimodal LLMs, it would be great if we could add extra outputs to `GenerationMixin.generate` results. For instance, if we implement a model like Janus from DeepSeek, there are two output heads: one `lm_head` and one `image_head`. The outputs of the `forward` method have extra attributes that can't be passed to the `generate` results.
I know these multimodal models are not common within this repo so this is pretty bleeding edge, but I'm working on research in this domain and it would be great if we could forward all model outputs to the `generate` result. Maybe through an attribute like `kwarg_outputs` in classes like `GenerateDecoderOnlyOutput`?
### Motivation
As far as I understand, it's possible to feed the extra outputs through the autoregressive loop via `prepare_inputs_for_generation` and `_update_model_kwargs_for_generation`, where model outputs can be forwarded to the next forward call.
But when it comes to forwarding these outputs to the result of `generate`, it doesn't seem possible. I know the generation mixin is geared towards text generation, but it would be great to be able to forward extra model outputs.
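A minimal sketch of what this could look like (illustrative only: `kwarg_outputs` is the attribute name proposed in this request, not an existing `transformers` field, and the class below is a stand-in for, not the real, `GenerateDecoderOnlyOutput`):

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class GenerateDecoderOnlyOutputSketch:
    """Illustrative stand-in for a generate() output class, extended with a
    free-form dict collecting extra per-step model outputs."""
    sequences: Any = None
    scores: Optional[tuple] = None
    # Hypothetical attribute: anything the model's forward() returned beyond
    # logits / past_key_values would be accumulated here, step by step.
    kwarg_outputs: dict = field(default_factory=dict)

# During the autoregressive loop, generate() could stash extra heads' outputs:
out = GenerateDecoderOnlyOutputSketch()
out.kwarg_outputs.setdefault("image_head_logits", []).append([0.1, 0.9])
```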
### Your contribution
Happy to have a try but not sure how big of a PR it would be, especially if it touches the pytorch / tf / flax implementations. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39834/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39833 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39833/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39833/comments | https://api.github.com/repos/huggingface/transformers/issues/39833/events | https://github.com/huggingface/transformers/issues/39833 | 3,281,815,763 | I_kwDOCUB6oc7DnIjT | 39,833 | Tool-Calling Model (ToolACE-2-Llama-3.1-8B) Responds with Irrelevant Tool message on General Question | {
"login": "dvn8weil",
"id": 190058927,
"node_id": "U_kgDOC1QRrw",
"avatar_url": "https://avatars.githubusercontent.com/u/190058927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dvn8weil",
"html_url": "https://github.com/dvn8weil",
"followers_url": "https://api.github.com/users/dvn8weil/followers",
"following_url": "https://api.github.com/users/dvn8weil/following{/other_user}",
"gists_url": "https://api.github.com/users/dvn8weil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dvn8weil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dvn8weil/subscriptions",
"organizations_url": "https://api.github.com/users/dvn8weil/orgs",
"repos_url": "https://api.github.com/users/dvn8weil/repos",
"events_url": "https://api.github.com/users/dvn8weil/events{/privacy}",
"received_events_url": "https://api.github.com/users/dvn8weil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-31T21:44:16 | 2025-08-01T15:35:58 | 2025-08-01T15:35:58 | NONE | null | null | null | null | I'm using the Team-ACE/ToolACE-2-Llama-3.1-8B model via vLLM's v1 chat completions endpoint, and encountering an issue with tool-calling behavior. When I provide a single tool function (like a weather function) and then ask a non-tool question, such as:
"Can you tell me about the Rust programming language?"
the model doesn't just ignore the tool—it responds as if it's required to use the function and outputs:
"The given question lacks the parameters required by the function. The available function is for getting the current weather, and there is no function related to the rust programming language."
This is unexpected behavior. The model should have instead fallen back to a regular assistant answer when the input clearly doesn’t require tool use.
### System Info
Running on macOS; output of `transformers env`:
```
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.53.2
- Platform: macOS-15.4.1-arm64-arm-64bit
- Python version: 3.12.11
- Huggingface_hub version: 0.33.4
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.0 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
vLLM command to set up the model:
```
export VLLM_CPU_KVCACHE_SPACE=16;
vllm serve Team-ACE/ToolACE-2-Llama-3.1-8B --dtype auto --enable-auto-tool-choice --tool-call-parser llama3_json --chat-template /path/to/tool-ace_chat_template.jinja
```
The chat-template file is directly taken from the model’s Hugging Face page (template provided by the model author). So this should not affect tool call behavior.
Link : https://huggingface.co/Team-ACE/ToolACE-2-Llama-3.1-8B?chat_template=default
#### Response with Tools in the request
The cURL command (with tools):
```
curl --location 'http://0.0.0.0:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer EMPTY' \
--data '{
"model": "Team-ACE/ToolACE-2-Llama-3.1-8B",
"messages": [
{
"role": "user",
"content": "Can you tell me about the rust programming language"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": [
"celsius",
"fahrenheit"
]
}
},
"required": [
"location"
]
}
}
}
],
"tool_choice": "auto"
}'
```
The response:
```
{
"id": "chatcmpl-a2cdb379ceb8404ab27e0a9f7059bac3",
"object": "chat.completion",
"created": 1753997702,
"model": "Team-ACE/ToolACE-2-Llama-3.1-8B",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"reasoning_content": null,
"content": "The given question lacks the parameters required by the function. The available function only provides current weather information and does not provide information about programming languages.",
"tool_calls": []
},
"logprobs": null,
"finish_reason": "stop",
"stop_reason": null
}
],
"usage": {
"prompt_tokens": 246,
"total_tokens": 275,
"completion_tokens": 29,
"prompt_tokens_details": null
},
"prompt_logprobs": null,
"kv_transfer_params": null
}
```
#### Response without Tools in the request
The cURL command (without tools):
```
curl --location 'http://0.0.0.0:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer EMPTY' \
--data '{
"model": "Team-ACE/ToolACE-2-Llama-3.1-8B",
"messages": [
{
"role": "user",
"content": "Can you tell me about the rust programming language"
}
]
}'
```
The response:
```
{
"id": "chatcmpl-925e4fd83c4440829a96f35cc75d9466",
"object": "chat.completion",
"created": 1753998782,
"model": "Team-ACE/ToolACE-2-Llama-3.1-8B",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"reasoning_content": null,
"content": "Rust is a systems programming language that focuses on safety, performance, and concurrency. Here are some key features and aspects of the Rust programming language:\n\n1. **Memory Safety**: Rust is designed to be memory-safe, which means it prevents common errors like null pointer dereferences, buffer overflows, and data races. This is achieved through a concept called ownership and borrowing, which ensure that resources are properly managed.\n\n2. **Performance**: Rust is designed to be fast and efficient. It uses a compile-time evaluation approach, which allows for performance-critical code to be optimized at compile time. This results in code that is both safe and fast.\n\n3. **Concurrency**: Rust has built-in support for concurrency through its concept of async/await and the `std::sync` module. This allows developers to write concurrent code that is both safe and efficient.\n\n4. **Compile-time Evaluation**: Rust's compile-time evaluation approach allows for the evaluation of expressions at compile time, which can result in more efficient code.\n\n5. **Error Handling**: Rust has a strong focus on error handling through its `Result` and `Error` types. This allows developers to handle errors in a robust and expressive way.\n\n6. **Stability and Maturity**: Rust is a relatively new language, but it has made significant progress in terms of stability and maturity. It is used in production by many companies, including Mozilla, Dropbox, and Microsoft.\n\n7. **Community**: Rust has a large and active community, with many resources available for learning and development. This includes the official Rust documentation, tutorials, and libraries.\n\n8. **Cross-Platform**: Rust is designed to be cross-platform, allowing developers to write code that can run on multiple operating systems and architectures.\n\n9. **Abstraction**: Rust provides a range of abstractions to help developers write efficient and safe code. 
These include the `Box` and `Rc` types for managing memory, and the `Mutex` and `RwLock` types for managing concurrency.\n\n10. **Tooling**: Rust has a range of tools available for development, including the `rustc` compiler, the `cargo` package manager, and the `rustfmt` formatter.\n\n### Example Use Case\n\nHere's an example of how Rust can be used to write a simple program:\n```rust\nfn main() {\n let x = 5;\n let y = 10;\n\n let sum = x + y;\n println!(\"The sum of {} and {} is {}\", x, y, sum);\n}\n```\nThis code defines a `main` function that calculates the sum of two numbers and prints the result to the console.\n\n### Resources\n\n- [Rust Official Website](https://www.rust-lang.org/)\n- [Rust Documentation](https://doc.rust-lang.org/)\n- [Rust by Example](https://doc.rust-lang.org/rust-by-example/)\n- [Rust Tutorial](https://doc.rust-lang.org/book/)\n\nOverall, Rust is a powerful and versatile language that is well-suited for systems programming and other applications where performance, safety, and concurrency are critical.",
"tool_calls": []
},
"logprobs": null,
"finish_reason": "stop",
"stop_reason": null
}
],
"usage": {
"prompt_tokens": 44,
"total_tokens": 679,
"completion_tokens": 635,
"prompt_tokens_details": null
},
"prompt_logprobs": null,
"kv_transfer_params": null
}
```
Link to the Hugging Face model page:
https://huggingface.co/Team-ACE/ToolACE-2-Llama-3.1-8B
### Expected behavior
For a non-tool-call prompt, the model should not alter its response significantly based on the inclusion of tool information in the chat request. | {
"login": "dvn8weil",
"id": 190058927,
"node_id": "U_kgDOC1QRrw",
"avatar_url": "https://avatars.githubusercontent.com/u/190058927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dvn8weil",
"html_url": "https://github.com/dvn8weil",
"followers_url": "https://api.github.com/users/dvn8weil/followers",
"following_url": "https://api.github.com/users/dvn8weil/following{/other_user}",
"gists_url": "https://api.github.com/users/dvn8weil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dvn8weil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dvn8weil/subscriptions",
"organizations_url": "https://api.github.com/users/dvn8weil/orgs",
"repos_url": "https://api.github.com/users/dvn8weil/repos",
"events_url": "https://api.github.com/users/dvn8weil/events{/privacy}",
"received_events_url": "https://api.github.com/users/dvn8weil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39833/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39832 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39832/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39832/comments | https://api.github.com/repos/huggingface/transformers/issues/39832/events | https://github.com/huggingface/transformers/pull/39832 | 3,281,599,284 | PR_kwDOCUB6oc6hmNyY | 39,832 | add multimodal executorch support | {
"login": "mergennachin",
"id": 1409555,
"node_id": "MDQ6VXNlcjE0MDk1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1409555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mergennachin",
"html_url": "https://github.com/mergennachin",
"followers_url": "https://api.github.com/users/mergennachin/followers",
"following_url": "https://api.github.com/users/mergennachin/following{/other_user}",
"gists_url": "https://api.github.com/users/mergennachin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mergennachin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mergennachin/subscriptions",
"organizations_url": "https://api.github.com/users/mergennachin/orgs",
"repos_url": "https://api.github.com/users/mergennachin/repos",
"events_url": "https://api.github.com/users/mergennachin/events{/privacy}",
"received_events_url": "https://api.github.com/users/mergennachin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T20:00:02 | 2025-08-01T22:52:14 | 2025-08-01T22:52:14 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39832",
"html_url": "https://github.com/huggingface/transformers/pull/39832",
"diff_url": "https://github.com/huggingface/transformers/pull/39832.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39832.patch",
"merged_at": null
} | New Class: TorchExportableModuleForImageTextLM
Dedicated wrapper for image-text language models:
- Purpose: Handles multimodal models that need inputs_embeds instead of input_ids
- Architecture: Automatically chooses HybridCache vs StaticCache based on model config
- Usage: Takes embeddings from vision encoder + text tokenizer as input
New Class: ImageEncoderExportableModule
Wrapper for vision encoder components:
- Purpose: Exports the vision processing pipeline (vision_tower → multi_modal_projector)
- Function: Converts images to language-compatible embeddings
- Integration: Works with TorchExportableModuleForImageTextLM for complete multimodal export
```python
# Multimodal model export

# Vision encoder export
vision_encoder = ImageEncoderExportableModule(model)
exported_vision = vision_encoder.export()

# Text decoder export
text_decoder = TorchExportableModuleForImageTextLM(model.language_model)
exported_text = text_decoder.export()

# Runtime usage

# Process image → embeddings
image_embeddings = exported_vision.module()(pixel_values)

# Process text → embeddings
text_embeddings = model.embed_tokens(text_ids)

# Combined inference
inputs_embeds = torch.cat([image_embeddings, text_embeddings], dim=1)
logits = exported_text.module()(inputs_embeds=inputs_embeds, cache_position=cache_position)
``` | {
"login": "mergennachin",
"id": 1409555,
"node_id": "MDQ6VXNlcjE0MDk1NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1409555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mergennachin",
"html_url": "https://github.com/mergennachin",
"followers_url": "https://api.github.com/users/mergennachin/followers",
"following_url": "https://api.github.com/users/mergennachin/following{/other_user}",
"gists_url": "https://api.github.com/users/mergennachin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mergennachin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mergennachin/subscriptions",
"organizations_url": "https://api.github.com/users/mergennachin/orgs",
"repos_url": "https://api.github.com/users/mergennachin/repos",
"events_url": "https://api.github.com/users/mergennachin/events{/privacy}",
"received_events_url": "https://api.github.com/users/mergennachin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39832/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39832/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39831 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39831/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39831/comments | https://api.github.com/repos/huggingface/transformers/issues/39831/events | https://github.com/huggingface/transformers/pull/39831 | 3,281,555,518 | PR_kwDOCUB6oc6hmETH | 39,831 | refactor(modeling_llama): make RotaryEmbedding default path explicit | {
"login": "pco111",
"id": 56655972,
"node_id": "MDQ6VXNlcjU2NjU1OTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/56655972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pco111",
"html_url": "https://github.com/pco111",
"followers_url": "https://api.github.com/users/pco111/followers",
"following_url": "https://api.github.com/users/pco111/following{/other_user}",
"gists_url": "https://api.github.com/users/pco111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pco111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pco111/subscriptions",
"organizations_url": "https://api.github.com/users/pco111/orgs",
"repos_url": "https://api.github.com/users/pco111/repos",
"events_url": "https://api.github.com/users/pco111/events{/privacy}",
"received_events_url": "https://api.github.com/users/pco111/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-31T19:38:21 | 2025-08-12T11:38:09 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39831",
"html_url": "https://github.com/huggingface/transformers/pull/39831",
"diff_url": "https://github.com/huggingface/transformers/pull/39831.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39831.patch",
"merged_at": null
} | # What does this PR do?
This PR refactors `LlamaRotaryEmbedding` to make the initialization for the default `rope_type` more explicit, as suggested in issue #39753.
Instead of relying on the `ROPE_INIT_FUNCTIONS` dictionary for the default case, the code now uses a direct call to `_compute_default_rope_parameters`. The dictionary lookup is reserved for non-default `rope_type` values. This change improves code readability and maintainability by making the default execution path clearer, aligning with the "explicit is better than implicit" philosophy.
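As a rough illustration of the pattern (a simplified sketch, not the actual `transformers` internals — only the names `_compute_default_rope_parameters` and `ROPE_INIT_FUNCTIONS` come from the PR description), the refactor moves the common case from an unconditional registry lookup to an explicit default branch:

```python
def _compute_default_rope_parameters(config):
    # Placeholder for the default RoPE frequency computation.
    return "default-params"


def _compute_linear_rope_parameters(config):
    # Placeholder for a non-default RoPE variant.
    return "linear-params"


ROPE_INIT_FUNCTIONS = {
    "default": _compute_default_rope_parameters,
    "linear": _compute_linear_rope_parameters,
}


def init_rope(config, rope_type="default"):
    # Explicit default path: the common case no longer goes through
    # the registry dictionary, making the control flow obvious.
    if rope_type == "default":
        return _compute_default_rope_parameters(config)
    # Non-default variants keep using the registry lookup.
    return ROPE_INIT_FUNCTIONS[rope_type](config)


assert init_rope(None) == "default-params"
assert init_rope(None, "linear") == "linear-params"
```

Behavior is unchanged for every `rope_type`; only the default path is made explicit.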
<!-- Remove if not applicable -->
Fixes #39753
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (Link: https://github.com/huggingface/transformers/issues/39753)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). (Not applicable, as this is an internal refactor with no user-facing changes).
- [ ] Did you write any new necessary tests? (Not applicable, existing tests cover this refactoring).
## Who can review?
@ArthurZucker @gante | {
"login": "pco111",
"id": 56655972,
"node_id": "MDQ6VXNlcjU2NjU1OTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/56655972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pco111",
"html_url": "https://github.com/pco111",
"followers_url": "https://api.github.com/users/pco111/followers",
"following_url": "https://api.github.com/users/pco111/following{/other_user}",
"gists_url": "https://api.github.com/users/pco111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pco111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pco111/subscriptions",
"organizations_url": "https://api.github.com/users/pco111/orgs",
"repos_url": "https://api.github.com/users/pco111/repos",
"events_url": "https://api.github.com/users/pco111/events{/privacy}",
"received_events_url": "https://api.github.com/users/pco111/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39831/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39830 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39830/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39830/comments | https://api.github.com/repos/huggingface/transformers/issues/39830/events | https://github.com/huggingface/transformers/pull/39830 | 3,281,473,707 | PR_kwDOCUB6oc6hlywL | 39,830 | fix: deprecate plot_keypoint_matching and make visualize_keypoint_matching for all Keypoint Matching models | {
"login": "sbucaille",
"id": 24275548,
"node_id": "MDQ6VXNlcjI0Mjc1NTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/24275548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbucaille",
"html_url": "https://github.com/sbucaille",
"followers_url": "https://api.github.com/users/sbucaille/followers",
"following_url": "https://api.github.com/users/sbucaille/following{/other_user}",
"gists_url": "https://api.github.com/users/sbucaille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sbucaille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbucaille/subscriptions",
"organizations_url": "https://api.github.com/users/sbucaille/orgs",
"repos_url": "https://api.github.com/users/sbucaille/repos",
"events_url": "https://api.github.com/users/sbucaille/events{/privacy}",
"received_events_url": "https://api.github.com/users/sbucaille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T19:01:58 | 2025-08-01T16:33:04 | 2025-08-01T16:29:57 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39830",
"html_url": "https://github.com/huggingface/transformers/pull/39830",
"diff_url": "https://github.com/huggingface/transformers/pull/39830.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39830.patch",
"merged_at": "2025-08-01T16:29:57"
} | # What does this PR do?
Adds `visualize_keypoint_matching` to the LightGlue and SuperGlue image processors.
Deprecates `plot_keypoint_matching` in the LightGlue image processor.
## Who can review?
@qubvel @stevhliu
| {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39830/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39829 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39829/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39829/comments | https://api.github.com/repos/huggingface/transformers/issues/39829/events | https://github.com/huggingface/transformers/pull/39829 | 3,281,024,889 | PR_kwDOCUB6oc6hkQo_ | 39,829 | [serve] allow array `content` inputs for LLMs | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T16:13:52 | 2025-08-13T10:26:22 | 2025-08-13T10:26:19 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39829",
"html_url": "https://github.com/huggingface/transformers/pull/39829",
"diff_url": "https://github.com/huggingface/transformers/pull/39829.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39829.patch",
"merged_at": "2025-08-13T10:26:19"
} | # What does this PR do?
LLMs in `transformers serve` now accept `content` messages containing arrays, as [expected in the API](https://platform.openai.com/docs/api-reference/chat/create).
Adds tests to prevent regressions.
Example of a command that gets fixed in this PR:
```bash
curl -X POST http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "system", "content": [{"text": "Can you help me write tests?", "type": "text"}]}], "temperature": 0.9, "max_tokens": 1000, "stream": true, "model": "Qwen/Qwen2.5-0.5B-Instruct"}'
```
Fixes #39791 | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39829/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39828 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39828/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39828/comments | https://api.github.com/repos/huggingface/transformers/issues/39828/events | https://github.com/huggingface/transformers/pull/39828 | 3,281,001,997 | PR_kwDOCUB6oc6hkLxc | 39,828 | fix test_working_of_tp failure of accelerate ut | {
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T16:06:10 | 2025-08-05T17:19:03 | 2025-08-05T08:52:57 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39828",
"html_url": "https://github.com/huggingface/transformers/pull/39828",
"diff_url": "https://github.com/huggingface/transformers/pull/39828.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39828.patch",
"merged_at": "2025-08-05T08:52:57"
### Symptom
The `accelerate` unit test `pytest -rA tests/tp/test_tp.py::TPIntegrationTest::test_working_of_tp` failed with the log below:
> stderr: [rank0]: File "/usr/local/lib/python3.11/dist-packages/transformers/integrations/tensor_parallel.py", line 1082, in distribute_model
> stderr: [rank0]: tp_plan = getattr(model, "_tp_plan", {}).copy()
> stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> stderr: [rank0]: AttributeError: 'NoneType' object has no attribute 'copy'
### Root Cause
`model._tp_plan` is `None` rather than missing, so `getattr(type(model), "_tp_plan", {})` returns `None`, leading to the error above.
### Proposed Fix
Use `getattr(type(model), "_tp_plan", None) or {}` so the result is never `None`.
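The distinction between a missing attribute and an attribute explicitly set to `None` can be sketched as follows (an illustrative toy class, not the actual `transformers` code):

```python
class Model:
    _tp_plan = None  # the attribute exists, but its value is None


model = Model()

# getattr's default only applies when the attribute is *missing*,
# so this returns None — and None.copy() would raise AttributeError.
broken = getattr(model, "_tp_plan", {})
assert broken is None

# "or {}" coerces both a missing attribute and an explicit None to {}.
fixed = getattr(model, "_tp_plan", None) or {}
assert fixed == {}

tp_plan = fixed.copy()  # safe now
```

The `or {}` idiom is what makes the subsequent `.copy()` call safe in both cases.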
### Results
The unit test now passes.
@SunMarc @ydshieh, please help review, thank you very much. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39828/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39827 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39827/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39827/comments | https://api.github.com/repos/huggingface/transformers/issues/39827/events | https://github.com/huggingface/transformers/pull/39827 | 3,280,905,648 | PR_kwDOCUB6oc6hj2oC | 39,827 | [WIP] RoPE refactor | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T15:33:55 | 2025-08-05T10:23:04 | 2025-08-05T10:23:03 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39827",
"html_url": "https://github.com/huggingface/transformers/pull/39827",
"diff_url": "https://github.com/huggingface/transformers/pull/39827.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39827.patch",
"merged_at": null
} | # What does this PR do?
Version 1
We want to support defining different RoPE params within one model, similar to how `layer_types` defines different attention patterns. The reason: some models already use global and local RoPE params (Gemma3, ModernBERT), and we were forced to monkey-patch to support them.
This PR is one option for how it can be done. Note that it is very much breaking, as the RoPE layers will return a dict of cos/sin when several RoPE params are defined. Version 2 is available in https://github.com/huggingface/transformers/pull/39847, and is less breaking in my opinion.
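A minimal sketch of the "dict of cos/sin keyed by RoPE variant" idea described above (hypothetical names, shapes, and a toy frequency computation — purely to illustrate the breaking return type, not the PR's implementation):

```python
import math


def rope_cos_sin(base, dim, position):
    # Toy rotary-frequency computation for a single position.
    inv_freq = [1.0 / (base ** (2 * i / dim)) for i in range(dim // 2)]
    angles = [position * f for f in inv_freq]
    return [math.cos(a) for a in angles], [math.sin(a) for a in angles]


# One parameter set per RoPE variant, analogous to how layer_types
# assigns different attention patterns per layer.
rope_params = {"global": {"base": 10000.0}, "local": {"base": 500.0}}


def forward(position, dim=4):
    # The breaking aspect: instead of a single (cos, sin) pair, the
    # embedding returns a dict, and each layer must index it by its
    # own RoPE type.
    return {
        name: rope_cos_sin(params["base"], dim, position)
        for name, params in rope_params.items()
    }


out = forward(position=3)
assert set(out) == {"global", "local"}
cos_global, sin_global = out["global"]
assert len(cos_global) == 2  # dim // 2 frequencies
```

A layer with `rope_type="local"` would then read `out["local"]`, which is exactly the caller-side change that makes this version breaking.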
Open for discussion; personally, I prefer version 2. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39827/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39826 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39826/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39826/comments | https://api.github.com/repos/huggingface/transformers/issues/39826/events | https://github.com/huggingface/transformers/pull/39826 | 3,280,891,352 | PR_kwDOCUB6oc6hjzfP | 39,826 | Add MetaCLIP 2 | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-31T15:29:30 | 2025-08-27T07:56:43 | 2025-08-20T07:25:43 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39826",
"html_url": "https://github.com/huggingface/transformers/pull/39826",
"diff_url": "https://github.com/huggingface/transformers/pull/39826.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39826.patch",
"merged_at": "2025-08-20T07:25:43"
} | # What does this PR do?
This PR adds MetaCLIP 2 using modular. Alternative to #39821.
It adapts `CLIPProcessor` to support any tokenizer using `AutoTokenizer`.
To do:
- [x] update integration test and convert remaining checkpoints
- [x] transfer checkpoints to the Meta org
The failing CI is unrelated. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39826/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39825 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39825/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39825/comments | https://api.github.com/repos/huggingface/transformers/issues/39825/events | https://github.com/huggingface/transformers/pull/39825 | 3,280,845,860 | PR_kwDOCUB6oc6hjpnq | 39,825 | [serve] guard imports | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T15:15:45 | 2025-08-18T15:28:50 | 2025-08-18T15:28:10 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39825",
"html_url": "https://github.com/huggingface/transformers/pull/39825",
"diff_url": "https://github.com/huggingface/transformers/pull/39825.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39825.patch",
"merged_at": "2025-08-18T15:28:10"
} | # What does this PR do?
Some imports were not properly guarded, so `transformers env` was failing on a base install. This PR fixes it.
Fixes #39779
Supersedes #39790 | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39825/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39825/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39824 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39824/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39824/comments | https://api.github.com/repos/huggingface/transformers/issues/39824/events | https://github.com/huggingface/transformers/pull/39824 | 3,280,835,090 | PR_kwDOCUB6oc6hjnQF | 39,824 | [DOCS] : Improved mimi model card | {
"login": "rohitthewanderer",
"id": 103673464,
"node_id": "U_kgDOBi3ueA",
"avatar_url": "https://avatars.githubusercontent.com/u/103673464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rohitthewanderer",
"html_url": "https://github.com/rohitthewanderer",
"followers_url": "https://api.github.com/users/rohitthewanderer/followers",
"following_url": "https://api.github.com/users/rohitthewanderer/following{/other_user}",
"gists_url": "https://api.github.com/users/rohitthewanderer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rohitthewanderer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rohitthewanderer/subscriptions",
"organizations_url": "https://api.github.com/users/rohitthewanderer/orgs",
"repos_url": "https://api.github.com/users/rohitthewanderer/repos",
"events_url": "https://api.github.com/users/rohitthewanderer/events{/privacy}",
"received_events_url": "https://api.github.com/users/rohitthewanderer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-31T15:12:39 | 2025-08-04T17:07:07 | 2025-08-04T17:07:06 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39824",
"html_url": "https://github.com/huggingface/transformers/pull/39824",
"diff_url": "https://github.com/huggingface/transformers/pull/39824.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39824.patch",
"merged_at": "2025-08-04T17:07:06"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/36979
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ebezzam | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39824/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39823 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39823/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39823/comments | https://api.github.com/repos/huggingface/transformers/issues/39823/events | https://github.com/huggingface/transformers/pull/39823 | 3,280,726,914 | PR_kwDOCUB6oc6hjPg- | 39,823 | [`attn_implementation`] remove recursive, allows custom kernels with wrappers | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T14:39:37 | 2025-08-01T10:18:30 | 2025-08-01T10:18:28 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39823",
"html_url": "https://github.com/huggingface/transformers/pull/39823",
"diff_url": "https://github.com/huggingface/transformers/pull/39823.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39823.patch",
"merged_at": "2025-08-01T10:18:28"
} | # What does this PR do?
This PR enables `paged_attention` usage:
```python
model = AutoModelForCausalLM.from_pretrained("path", attn_implementation="paged_attention|kernels-community/flash-attn3")
```
This means I want to use the `paged_attention` wrapper of `transformers` -> no attention mask, no input pre-processing
```python
model = AutoModelForCausalLM.from_pretrained("path", attn_implementation="sdpa|kernels-community/flash-attn3")
```
would use the `sdpa` wrapper from https://github.com/huggingface/transformers/blob/fix-paged-wrapper/src/transformers/integrations/sdpa_attention.py#L16-L16 | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39823/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39822 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39822/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39822/comments | https://api.github.com/repos/huggingface/transformers/issues/39822/events | https://github.com/huggingface/transformers/pull/39822 | 3,280,621,307 | PR_kwDOCUB6oc6hi4YU | 39,822 | chore: update DETR model card | {
"login": "arpon-kapuria",
"id": 83688431,
"node_id": "MDQ6VXNlcjgzNjg4NDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/83688431?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arpon-kapuria",
"html_url": "https://github.com/arpon-kapuria",
"followers_url": "https://api.github.com/users/arpon-kapuria/followers",
"following_url": "https://api.github.com/users/arpon-kapuria/following{/other_user}",
"gists_url": "https://api.github.com/users/arpon-kapuria/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arpon-kapuria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arpon-kapuria/subscriptions",
"organizations_url": "https://api.github.com/users/arpon-kapuria/orgs",
"repos_url": "https://api.github.com/users/arpon-kapuria/repos",
"events_url": "https://api.github.com/users/arpon-kapuria/events{/privacy}",
"received_events_url": "https://api.github.com/users/arpon-kapuria/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T14:09:39 | 2025-08-04T19:25:54 | 2025-08-04T19:25:54 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39822",
"html_url": "https://github.com/huggingface/transformers/pull/39822",
"diff_url": "https://github.com/huggingface/transformers/pull/39822.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39822.patch",
"merged_at": "2025-08-04T19:25:54"
} | # What does this PR do?
This PR updates the model card for DETR, following the template outlined in the issue.
## Before submitting
- [x] This PR improves the docs.
## Who can review?
@stevhliu
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39822/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39821 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39821/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39821/comments | https://api.github.com/repos/huggingface/transformers/issues/39821/events | https://github.com/huggingface/transformers/pull/39821 | 3,280,524,873 | PR_kwDOCUB6oc6hijYj | 39,821 | Support MetaCLIP 2 | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-31T13:41:06 | 2025-08-01T08:08:13 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39821",
"html_url": "https://github.com/huggingface/transformers/pull/39821",
"diff_url": "https://github.com/huggingface/transformers/pull/39821.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39821.patch",
"merged_at": null
} | # What does this PR do?
Meta just released [MetaCLIP 2 (worldwide)](https://github.com/facebookresearch/MetaCLIP?tab=readme-ov-file#pre-trained-models), new CLIP models trained on 300+ languages.
However, when making them compatible with `modeling_clip.py`, I noticed there's a mistake with the original OpenAI CLIP models.
* they have the EOS token ID set to 2 in the config: https://huggingface.co/openai/clip-vit-large-patch14/blob/main/config.json#L25. However, the OpenAI CLIP models don't use 2 as EOS token ID. They use 49407. You can check this when tokenizing text using `CLIPTokenizer`:
```python
>>> from transformers import CLIPTokenizer
>>> tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
>>> input_ids = tokenizer("hello world", return_tensors="pt")
>>> input_ids
{'input_ids': tensor([[49406, 3306, 1002, 49407]]), 'attention_mask': tensor([[1, 1, 1, 1]])}
```
* this was fixed in #24773. For backwards compatibility, a block of code was kept that only runs when a model has wrongly set EOS token ID == 2.
* since MetaCLIP 2 **actually** uses EOS token ID == 2 with a multilingual tokenizer (https://huggingface.co/facebook/xlm-v-base), it needs the "else" block which gets the EOS token from each sequence along the batch dimension.
* this means we'd need to adapt the "if" block. I propose here to simply check whether the max value in each row of the input_ids corresponds to 49407, the value all OpenAI CLIP models use.
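The dispatch described above could be sketched roughly like this (a minimal illustration using plain Python lists instead of tensors; the function name `pooled_token_index` is hypothetical, not the actual code in `modeling_clip.py`):

```python
OPENAI_CLIP_EOS = 49407  # EOS id shared by all OpenAI CLIP checkpoints

def pooled_token_index(input_ids_row, eos_token_id):
    """Return the index of the token whose hidden state is pooled.

    OpenAI CLIP checkpoints wrongly declare eos_token_id == 2, but their
    tokenizer always emits 49407, the highest id in the vocab, so the legacy
    path takes the argmax. Models whose tokenizer genuinely uses the
    configured EOS id (e.g. MetaCLIP 2 with a multilingual tokenizer where
    2 really is EOS) search for the first occurrence of that id instead.
    """
    row_max = max(input_ids_row)
    if row_max == OPENAI_CLIP_EOS:
        # legacy OpenAI CLIP path: EOS is the largest id in the sequence
        return input_ids_row.index(row_max)
    # general path: first position equal to the configured EOS id
    return input_ids_row.index(eos_token_id)

# OpenAI CLIP: "hello world" -> [49406, 3306, 1002, 49407]
print(pooled_token_index([49406, 3306, 1002, 49407], eos_token_id=2))  # -> 3
# Multilingual tokenizer where EOS id really is 2
print(pooled_token_index([0, 35378, 8999, 2], eos_token_id=2))  # -> 3
```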
cc @ydshieh | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39821/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/39821/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39820 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39820/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39820/comments | https://api.github.com/repos/huggingface/transformers/issues/39820/events | https://github.com/huggingface/transformers/pull/39820 | 3,280,388,384 | PR_kwDOCUB6oc6hiFTO | 39,820 | [cohere2 vision] move doc to multimodal section | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T12:59:38 | 2025-07-31T13:13:16 | 2025-07-31T13:13:02 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39820",
"html_url": "https://github.com/huggingface/transformers/pull/39820",
"diff_url": "https://github.com/huggingface/transformers/pull/39820.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39820.patch",
"merged_at": "2025-07-31T13:13:02"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39820/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39819 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39819/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39819/comments | https://api.github.com/repos/huggingface/transformers/issues/39819/events | https://github.com/huggingface/transformers/pull/39819 | 3,280,316,405 | PR_kwDOCUB6oc6hh1ZL | 39,819 | Fix bad markdown links | {
"login": "ebezzam",
"id": 4757445,
"node_id": "MDQ6VXNlcjQ3NTc0NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4757445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ebezzam",
"html_url": "https://github.com/ebezzam",
"followers_url": "https://api.github.com/users/ebezzam/followers",
"following_url": "https://api.github.com/users/ebezzam/following{/other_user}",
"gists_url": "https://api.github.com/users/ebezzam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ebezzam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ebezzam/subscriptions",
"organizations_url": "https://api.github.com/users/ebezzam/orgs",
"repos_url": "https://api.github.com/users/ebezzam/repos",
"events_url": "https://api.github.com/users/ebezzam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ebezzam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T12:40:11 | 2025-07-31T16:14:15 | 2025-07-31T16:14:15 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39819",
"html_url": "https://github.com/huggingface/transformers/pull/39819",
"diff_url": "https://github.com/huggingface/transformers/pull/39819.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39819.patch",
"merged_at": "2025-07-31T16:14:15"
} | # What does this PR do?
While writing the docs for one model and taking inspiration from another, I noticed bad links to other models.
For example, with [Dia](https://github.com/huggingface/transformers/blob/6ba8a1ff4550b4450a22a0b0d907312955ce0fd5/docs/source/en/model_doc/dia.md?plain=1#L36C1-L37C41), its markdown contains:
```
a pretrained codec model [DAC](./dac.md) is used...
```
The DAC link on [the docs](https://huggingface.co/docs/transformers/main/en/model_doc/dia#overview) leads to a `404`: https://huggingface.co/docs/transformers/main/en/model_doc/dac.md
**The markdown should omit the ".md" extension:**
```
a pretrained codec model [DAC](./dac) is used...
```
To lead to: https://huggingface.co/docs/transformers/main/en/model_doc/dac
See `docs/source/en/conversations.md` for a good and bad example in the same line 🫠 ([markdown](https://github.com/huggingface/transformers/blob/main/docs/source/en/conversations.md?plain=1#L161), bottom of [rendered](https://huggingface.co/docs/transformers/main/en/conversations))
```
> Parameters may not be active for every generated token in MoE models such as [Mixtral](./model_doc/mixtral), [Qwen2MoE](./model_doc/qwen2_moe.md), and [DBRX](./model_doc/dbrx). As a result, MoE models generally have much lower memory bandwidth requirements and can be faster than a regular LLM of the same size. However, techniques like speculative decoding are ineffective with MoE models because parameters become activated with each new speculated token.
```
`qwen2_moe` has a bad link, while `mixtral` and `dbrx` are fine.
---
# How I made the changes
Within VS Code, I did the following replace in the `docs` folder (and a couple of other manual fixes).
- Search: `(\[[^\]]+\]\((?!https?:\/\/|www\.)[^)\s]*)\.md(\))`
- Replace: `$1$2`
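For reference, the same search/replace can be reproduced outside VS Code with Python's `re` module (a sketch illustrating only the pattern above; the sample `text` is made up):

```python
import re

# Strip a trailing ".md" from relative markdown links, leaving absolute URLs alone.
# The negative lookahead skips links that start with http(s):// or www.
PATTERN = re.compile(r"(\[[^\]]+\]\((?!https?://|www\.)[^)\s]*)\.md(\))")

text = "see [DAC](./dac.md) and [docs](https://example.com/page.md)"
print(PATTERN.sub(r"\1\2", text))
# -> see [DAC](./dac) and [docs](https://example.com/page.md)
```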
@stevhliu | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39819/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39818 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39818/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39818/comments | https://api.github.com/repos/huggingface/transformers/issues/39818/events | https://github.com/huggingface/transformers/issues/39818 | 3,280,308,340 | I_kwDOCUB6oc7DhYh0 | 39,818 | Qwen2-VL err | {
"login": "Yan0613",
"id": 77104028,
"node_id": "MDQ6VXNlcjc3MTA0MDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/77104028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yan0613",
"html_url": "https://github.com/Yan0613",
"followers_url": "https://api.github.com/users/Yan0613/followers",
"following_url": "https://api.github.com/users/Yan0613/following{/other_user}",
"gists_url": "https://api.github.com/users/Yan0613/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yan0613/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yan0613/subscriptions",
"organizations_url": "https://api.github.com/users/Yan0613/orgs",
"repos_url": "https://api.github.com/users/Yan0613/repos",
"events_url": "https://api.github.com/users/Yan0613/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yan0613/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T12:38:11 | 2025-07-31T12:38:17 | 2025-07-31T12:38:17 | NONE | null | null | null | null | When adapting the [example script for distributed GPU inference from the documentation](https://huggingface.co/docs/transformers/perf_infer_gpu_multi) using tensor parallelism for the Qwen-2.5-VL-family, the following errors arise.
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/rupert/qwen2.5-vl-test/.venv/lib/python3.12/site-packages/torch/distributed/tensor/_sharding_prop.py", line 447, in propagate_op_sharding_non_cached
[rank0]: output_sharding = sharding_prop_func(op_schema)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/rupert/qwen2.5-vl-test/.venv/lib/python3.12/site-packages/torch/distributed/tensor/_ops/_conv_ops.py", line 29, in convolution_rules
[rank0]: assert isinstance(bias_spec, DTensorSpec)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: AssertionError
```
from this exception (callstack abridged):
```
[rank0]: File "/home/rupert/qwen2.5-vl-test/.venv/lib/python3.12/site-packages/torch/distributed/tensor/_sharding_prop.py", line 451, in propagate_op_sharding_non_cached
[rank0]: raise RuntimeError(
[rank0]: RuntimeError: Sharding propagation failed on op Op(op=aten.convolution.default, args_schema=Spec(R on (1064, 3, 2, 14, 14)), Spec(R on (1280, 3, 2, 14, 14)), None, [2, 14, 14], [0, 0, 0], [1, 1, 1], False, [0, 0, 0], 1 @ mesh: (1,)).
``` | {
"login": "Yan0613",
"id": 77104028,
"node_id": "MDQ6VXNlcjc3MTA0MDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/77104028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yan0613",
"html_url": "https://github.com/Yan0613",
"followers_url": "https://api.github.com/users/Yan0613/followers",
"following_url": "https://api.github.com/users/Yan0613/following{/other_user}",
"gists_url": "https://api.github.com/users/Yan0613/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yan0613/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yan0613/subscriptions",
"organizations_url": "https://api.github.com/users/Yan0613/orgs",
"repos_url": "https://api.github.com/users/Yan0613/repos",
"events_url": "https://api.github.com/users/Yan0613/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yan0613/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39818/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39817 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39817/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39817/comments | https://api.github.com/repos/huggingface/transformers/issues/39817/events | https://github.com/huggingface/transformers/pull/39817 | 3,280,063,386 | PR_kwDOCUB6oc6hhACa | 39,817 | Update documentation for Cohere2Vision models | {
"login": "kyle-cohere",
"id": 155960770,
"node_id": "U_kgDOCUvFwg",
"avatar_url": "https://avatars.githubusercontent.com/u/155960770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kyle-cohere",
"html_url": "https://github.com/kyle-cohere",
"followers_url": "https://api.github.com/users/kyle-cohere/followers",
"following_url": "https://api.github.com/users/kyle-cohere/following{/other_user}",
"gists_url": "https://api.github.com/users/kyle-cohere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kyle-cohere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyle-cohere/subscriptions",
"organizations_url": "https://api.github.com/users/kyle-cohere/orgs",
"repos_url": "https://api.github.com/users/kyle-cohere/repos",
"events_url": "https://api.github.com/users/kyle-cohere/events{/privacy}",
"received_events_url": "https://api.github.com/users/kyle-cohere/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T11:16:45 | 2025-07-31T11:59:26 | 2025-07-31T11:58:45 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39817",
"html_url": "https://github.com/huggingface/transformers/pull/39817",
"diff_url": "https://github.com/huggingface/transformers/pull/39817.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39817.patch",
"merged_at": "2025-07-31T11:58:45"
} | # What does this PR do?
Update the documentation to include an example of using Cohere Command A Vision in a pipeline, and add it to the list of vision models.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@zucchini-nlp
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39817/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39816 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39816/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39816/comments | https://api.github.com/repos/huggingface/transformers/issues/39816/events | https://github.com/huggingface/transformers/pull/39816 | 3,279,979,046 | PR_kwDOCUB6oc6hgt1y | 39,816 | Refactor ViT-like models | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T10:45:44 | 2025-08-26T09:14:06 | 2025-08-26T09:14:06 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39816",
"html_url": "https://github.com/huggingface/transformers/pull/39816",
"diff_url": "https://github.com/huggingface/transformers/pull/39816.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39816.patch",
"merged_at": "2025-08-26T09:14:06"
} | # What does this PR do?
Refactor ViT and dependent models to use the `@check_model_inputs` and `@can_return_tuple` decorators, removing all the logic for intermediate `hidden_states` and `attentions` capture
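For illustration, a hedged sketch of what a `@can_return_tuple`-style decorator can look like (a simplification for this PR description, not the actual transformers implementation):

```python
from functools import wraps

def can_return_tuple(fn):
    # Wrap a forward() that builds a dict-like output, and convert it to
    # a plain tuple when the caller passes return_dict=False.
    @wraps(fn)
    def wrapper(*args, return_dict=True, **kwargs):
        output = fn(*args, **kwargs)
        return output if return_dict else tuple(output.values())
    return wrapper

class TinyModel:
    @can_return_tuple
    def forward(self, x):
        return {"last_hidden_state": x, "pooler_output": x * 2}

model = TinyModel()
as_dict = model.forward(3)                    # {"last_hidden_state": 3, "pooler_output": 6}
as_tuple = model.forward(3, return_dict=False)  # (3, 6)
```

Centralizing this in a decorator is what lets the per-model capture logic be deleted.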
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39816/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39815 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39815/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39815/comments | https://api.github.com/repos/huggingface/transformers/issues/39815/events | https://github.com/huggingface/transformers/pull/39815 | 3,279,968,764 | PR_kwDOCUB6oc6hgrmE | 39,815 | [chat template] update when "push_to_hub" | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T10:41:44 | 2025-10-15T13:50:00 | 2025-10-15T13:50:00 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39815",
"html_url": "https://github.com/huggingface/transformers/pull/39815",
"diff_url": "https://github.com/huggingface/transformers/pull/39815.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39815.patch",
"merged_at": "2025-10-15T13:50:00"
} | # What does this PR do?
As discussed internally, we don't have tests for loading and pushing chat templates to the Hub, and a few bugs were encountered recently
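For reference, the rendering half of such a round-trip can be sketched as follows (the template and message are illustrative only; the Hub push/load step is out of scope here):

```python
from jinja2 import Template

# A minimal chat template: render each message as "role: content".
chat_template = "{% for m in messages %}{{ m['role'] }}: {{ m['content'] }}\n{% endfor %}"
messages = [{"role": "user", "content": "hi"}]
rendered = Template(chat_template).render(messages=messages)
```

A push/load test would then assert that the template string survives the round-trip unchanged and still renders identically.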
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39815/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39814 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39814/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39814/comments | https://api.github.com/repos/huggingface/transformers/issues/39814/events | https://github.com/huggingface/transformers/issues/39814 | 3,279,751,841 | I_kwDOCUB6oc7DfQqh | 39,814 | Flash Attention fails with non aligned position_ids | {
"login": "alessiodevoto",
"id": 50107094,
"node_id": "MDQ6VXNlcjUwMTA3MDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/50107094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alessiodevoto",
"html_url": "https://github.com/alessiodevoto",
"followers_url": "https://api.github.com/users/alessiodevoto/followers",
"following_url": "https://api.github.com/users/alessiodevoto/following{/other_user}",
"gists_url": "https://api.github.com/users/alessiodevoto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alessiodevoto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alessiodevoto/subscriptions",
"organizations_url": "https://api.github.com/users/alessiodevoto/orgs",
"repos_url": "https://api.github.com/users/alessiodevoto/repos",
"events_url": "https://api.github.com/users/alessiodevoto/events{/privacy}",
"received_events_url": "https://api.github.com/users/alessiodevoto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-31T09:28:59 | 2025-08-07T17:26:24 | 2025-08-07T17:26:24 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.54.1
- Platform: Linux-6.1.123+-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@winglian
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
### Overview
Hi! In the latest release (v4.54.1), there was a change in [how the max_length is computed when using flash attention](https://github.com/huggingface/transformers/blob/cb289ad243a5aa4c76719f4df1d4c07171e338da/src/transformers/modeling_flash_attention_utils.py#L243). This raises an error if we forward a sequence in which no position_id equals 0.
### Code to reproduce
This is minimal code to reproduce the issue. It **works fine** when using `attn_implementation="eager"`, but fails with Flash Attention.
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Use Llama-3.2-1B-Instruct for testing, but it applies to all models
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct", torch_dtype=torch.bfloat16, device_map="auto", attn_implementation="flash_attention_2")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
# create a dummy input
input_ids = tokenizer.encode("All good here how are you?", return_tensors="pt").to(model.device)
# the position ids start from 1 instead of 0
position_ids = torch.arange(1, input_ids.shape[1]+1).unsqueeze(0).to(model.device)
output = model(input_ids, position_ids=position_ids) # Fails
# RuntimeError: max(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument.
```
I think the problem is that when we call `diff()` in [this line](https://github.com/huggingface/transformers/blob/cb289ad243a5aa4c76719f4df1d4c07171e338da/src/transformers/modeling_flash_attention_utils.py#L243), we get an empty tensor (as no position_ids equal 0) and then `max()` fails.
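The mechanism can be reproduced in isolation; below is a minimal sketch (a simplification of the chain described above, not the actual helper in `modeling_flash_attention_utils.py`):

```python
import torch

# Sequence starts are located where position_ids == 0; when no element
# is 0, the boundary-index tensor is empty, and the subsequent max()
# reduction raises the RuntimeError quoted in the reproduction above.
position_ids = torch.arange(1, 9)                  # starts at 1: no zeros
starts = (position_ids == 0).nonzero().flatten()   # empty index tensor
raised = False
try:
    starts.max()  # max() on a tensor with numel() == 0 raises
except RuntimeError:
    raised = True
```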
### Why it matters
Passing position_ids without zero elements makes sense in all those cases where you have a KV Cache and want to generate starting from there. We maintain [NVIDIA/KVPress](https://github.com/NVIDIA/kvpress), a library for KV Cache compression, and rely on this for our pipeline.
### Expected behavior
No errors. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39814/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39814/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39813 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39813/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39813/comments | https://api.github.com/repos/huggingface/transformers/issues/39813/events | https://github.com/huggingface/transformers/pull/39813 | 3,279,670,512 | PR_kwDOCUB6oc6hfqkX | 39,813 | [docs] fix korean docs yet again | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T09:00:07 | 2025-07-31T12:44:38 | 2025-07-31T09:13:25 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39813",
"html_url": "https://github.com/huggingface/transformers/pull/39813",
"diff_url": "https://github.com/huggingface/transformers/pull/39813.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39813.patch",
"merged_at": "2025-07-31T09:13:25"
} | # What does this PR do?
Something went wrong merging main into #39660, and CI is still red.
`doc-builder build transformers docs/source/ko/ --language ko --clean` is green on my side. | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39813/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39812 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39812/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39812/comments | https://api.github.com/repos/huggingface/transformers/issues/39812/events | https://github.com/huggingface/transformers/issues/39812 | 3,279,629,710 | I_kwDOCUB6oc7Dey2O | 39,812 | Why `lm-head` weight still exists with `"tie_word_embeddings": true` | {
"login": "Kelvinlby",
"id": 76610777,
"node_id": "MDQ6VXNlcjc2NjEwNzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/76610777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kelvinlby",
"html_url": "https://github.com/Kelvinlby",
"followers_url": "https://api.github.com/users/Kelvinlby/followers",
"following_url": "https://api.github.com/users/Kelvinlby/following{/other_user}",
"gists_url": "https://api.github.com/users/Kelvinlby/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kelvinlby/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kelvinlby/subscriptions",
"organizations_url": "https://api.github.com/users/Kelvinlby/orgs",
"repos_url": "https://api.github.com/users/Kelvinlby/repos",
"events_url": "https://api.github.com/users/Kelvinlby/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kelvinlby/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-31T08:45:32 | 2025-08-04T23:51:21 | 2025-08-04T23:51:21 | NONE | null | null | null | null | ### System Info
When directly loading the `model.safetensors` file from qwen3-0.6b, I found that a weight named `lm_head.weight` is stored, even though `config.json` shows `"tie_word_embeddings": true`.
So what EXACTLY does tying the word embeddings do? I can't find it documented anywhere...
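For context, weight tying conventionally means the output projection reuses the input embedding matrix; a minimal sketch (an illustrative simplification, not the actual transformers loading code):

```python
import torch.nn as nn

# Illustrative sizes, not Qwen3's real dimensions.
vocab_size, hidden_size = 1000, 64
embed_tokens = nn.Embedding(vocab_size, hidden_size)
lm_head = nn.Linear(hidden_size, vocab_size, bias=False)

# Tying: the lm_head reuses the very same Parameter as the embedding,
# so no separate projection weights are stored or trained.
lm_head.weight = embed_tokens.weight
tied = lm_head.weight.data_ptr() == embed_tokens.weight.data_ptr()
```

With tying in effect, any `lm_head.weight` entry in a checkpoint would just be a copy of the embedding matrix rather than an independent parameter.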
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from safetensors.numpy import load_file
param = load_file('model.safetensors')
print(param.keys())
```
Then you will see `lm_head.weight`
### Expected behavior
As far as I understand, no separate projection layer is needed with tied word embeddings.
"login": "Kelvinlby",
"id": 76610777,
"node_id": "MDQ6VXNlcjc2NjEwNzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/76610777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kelvinlby",
"html_url": "https://github.com/Kelvinlby",
"followers_url": "https://api.github.com/users/Kelvinlby/followers",
"following_url": "https://api.github.com/users/Kelvinlby/following{/other_user}",
"gists_url": "https://api.github.com/users/Kelvinlby/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kelvinlby/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kelvinlby/subscriptions",
"organizations_url": "https://api.github.com/users/Kelvinlby/orgs",
"repos_url": "https://api.github.com/users/Kelvinlby/repos",
"events_url": "https://api.github.com/users/Kelvinlby/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kelvinlby/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39812/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39811 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39811/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39811/comments | https://api.github.com/repos/huggingface/transformers/issues/39811/events | https://github.com/huggingface/transformers/issues/39811 | 3,279,531,146 | I_kwDOCUB6oc7DeayK | 39,811 | Missing einops dependency causing ModuleNotFoundError | {
"login": "iforgetmyname",
"id": 14368888,
"node_id": "MDQ6VXNlcjE0MzY4ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/14368888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iforgetmyname",
"html_url": "https://github.com/iforgetmyname",
"followers_url": "https://api.github.com/users/iforgetmyname/followers",
"following_url": "https://api.github.com/users/iforgetmyname/following{/other_user}",
"gists_url": "https://api.github.com/users/iforgetmyname/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iforgetmyname/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iforgetmyname/subscriptions",
"organizations_url": "https://api.github.com/users/iforgetmyname/orgs",
"repos_url": "https://api.github.com/users/iforgetmyname/repos",
"events_url": "https://api.github.com/users/iforgetmyname/events{/privacy}",
"received_events_url": "https://api.github.com/users/iforgetmyname/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-31T08:09:43 | 2025-08-13T03:38:01 | 2025-08-12T15:04:20 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.53.2
- Platform: Linux-4.19.90-vhulk2211.3.0.h1543.eulerosv2r10.aarch64-aarch64-with-glibc2.35
- Python version: 3.11.13
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.6.0+cpu (NPU)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: yes
- Using NPU in script?: yes
- NPU type: Ascend910B4
- CANN version: 8.2.RC1.alpha003
### Who can help?
@ivarflakstad
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Background: running SGLang on Ascend NPU
1. Run with ascend/cann:8.2.rc1.alpha003-910b-ubuntu22.04-py3.11 docker image
2. Install deps: `pip install torch==2.6.0 torchvision==0.21.0 --index-url https://download.pytorch.org/whl/cpu && pip install torch_npu==2.6.0`
3. Clone sglang and run `pip install -e "python[srt_npu]"`
4. Check `transformers env`
<img width="898" height="641" alt="Image" src="https://github.com/user-attachments/assets/4fee73a0-1dc7-48b5-a126-9b50e8ece8d0" />
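The failure is the classic hard import of an optional package; a hedged sketch of the soft-dependency guard typically used for such cases (the function name is illustrative, not the actual transformers utility):

```python
import importlib.util

def is_einops_available() -> bool:
    # Probe for the package without importing it.
    return importlib.util.find_spec("einops") is not None

if is_einops_available():
    import einops  # noqa: F401
else:
    # Code paths that truly need einops should raise a clear ImportError
    # instead of failing at `import transformers` time.
    einops = None
```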
### Expected behavior
import transformers successfully | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39811/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39811/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39810 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39810/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39810/comments | https://api.github.com/repos/huggingface/transformers/issues/39810/events | https://github.com/huggingface/transformers/pull/39810 | 3,279,457,892 | PR_kwDOCUB6oc6he9KH | 39,810 | [Model] Cohere2 Vision | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T07:42:13 | 2025-07-31T10:57:35 | 2025-07-31T10:57:34 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39810",
"html_url": "https://github.com/huggingface/transformers/pull/39810",
"diff_url": "https://github.com/huggingface/transformers/pull/39810.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39810.patch",
"merged_at": "2025-07-31T10:57:34"
} | # What does this PR do?
Adds a new model.
A few unrelated but needed changes:
- The cache in multi-GPU setups wasn't working; `layer_device_map` and `model.get_decoder()` needed fixes
- The token isn't passed when downloading additional chat templates
- The new `check_outputs` doesn't work with VLMs because we don't know what type of LM backbone a model uses, so we just check whether the module names overlap | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39810/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39809 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39809/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39809/comments | https://api.github.com/repos/huggingface/transformers/issues/39809/events | https://github.com/huggingface/transformers/pull/39809 | 3,279,430,315 | PR_kwDOCUB6oc6he3SK | 39,809 | Fix broken links | {
"login": "oToToT",
"id": 8341564,
"node_id": "MDQ6VXNlcjgzNDE1NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8341564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oToToT",
"html_url": "https://github.com/oToToT",
"followers_url": "https://api.github.com/users/oToToT/followers",
"following_url": "https://api.github.com/users/oToToT/following{/other_user}",
"gists_url": "https://api.github.com/users/oToToT/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oToToT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oToToT/subscriptions",
"organizations_url": "https://api.github.com/users/oToToT/orgs",
"repos_url": "https://api.github.com/users/oToToT/repos",
"events_url": "https://api.github.com/users/oToToT/events{/privacy}",
"received_events_url": "https://api.github.com/users/oToToT/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T07:32:00 | 2025-07-31T13:23:30 | 2025-07-31T13:23:04 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39809",
"html_url": "https://github.com/huggingface/transformers/pull/39809",
"diff_url": "https://github.com/huggingface/transformers/pull/39809.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39809.patch",
"merged_at": "2025-07-31T13:23:04"
} | # What does this PR do?
This PR fixes some broken links by replacing links written as `[text]((url))` with `[text](url)`, which is the correct Markdown link format.
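As a rough illustration (not the script actually used in this PR; the function name and regex are assumptions), a substitution along these lines can normalize the doubled parentheses:

```python
import re

def fix_double_paren_links(text: str) -> str:
    """Rewrite [text]((url)) Markdown links to [text](url).

    Minimal sketch for illustration only.
    """
    # Capture the link text and the URL, dropping the extra parentheses.
    return re.sub(r"\[([^\]]*)\]\(\(([^)]*)\)\)", r"[\1](\2)", text)
```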
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @stevhliu @ydshieh | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39809/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39808 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39808/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39808/comments | https://api.github.com/repos/huggingface/transformers/issues/39808/events | https://github.com/huggingface/transformers/pull/39808 | 3,279,416,682 | PR_kwDOCUB6oc6he0bQ | 39,808 | 🌐 [i18n-KO] Translated `gpt2.md` to Korean | {
"login": "taemincode",
"id": 187865781,
"node_id": "U_kgDOCzKatQ",
"avatar_url": "https://avatars.githubusercontent.com/u/187865781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taemincode",
"html_url": "https://github.com/taemincode",
"followers_url": "https://api.github.com/users/taemincode/followers",
"following_url": "https://api.github.com/users/taemincode/following{/other_user}",
"gists_url": "https://api.github.com/users/taemincode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taemincode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taemincode/subscriptions",
"organizations_url": "https://api.github.com/users/taemincode/orgs",
"repos_url": "https://api.github.com/users/taemincode/repos",
"events_url": "https://api.github.com/users/taemincode/events{/privacy}",
"received_events_url": "https://api.github.com/users/taemincode/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T07:27:12 | 2025-08-13T17:00:25 | 2025-08-13T17:00:25 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39808",
"html_url": "https://github.com/huggingface/transformers/pull/39808",
"diff_url": "https://github.com/huggingface/transformers/pull/39808.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39808.patch",
"merged_at": "2025-08-13T17:00:25"
} | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Translated the `gpt2.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. Only after all of the checks above are complete, reveal the comment below asking KREW team members for a review! -->
May you please review this PR?
<!-- @jungnerd, @luckyvickyricky, @chelsseeey, @skwh54, @amo33, @maximizemaxwell, @D15M4S -->
<!-- @harheem, @nsbg, @Youngdong2, @xhaktm00, @ssunbear, @ChoHyoungSeo, @judy-choi -->
<!-- @4N3MONE, @Kim-Ju-won, @ahnjj, @FacerAin, @ssum21, @TaskerJang, @HyunZ118 -->
<!-- @yijun-lee, @songi104, @chhaewxn, @AhnJoonSung, @jihyun-0611, @seopp, @pyapyapya -->
@yijun-lee @harheem @4N3MONE @jungnerd
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. After the KREW team members' review is finished, reveal the comment below! -->
@stevhliu May you please review this PR? | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39808/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39808/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39807 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39807/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39807/comments | https://api.github.com/repos/huggingface/transformers/issues/39807/events | https://github.com/huggingface/transformers/pull/39807 | 3,279,371,002 | PR_kwDOCUB6oc6hequy | 39,807 | 🌐 [i18n-KO] Translated `bamba.md` to Korean | {
"login": "taemincode",
"id": 187865781,
"node_id": "U_kgDOCzKatQ",
"avatar_url": "https://avatars.githubusercontent.com/u/187865781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taemincode",
"html_url": "https://github.com/taemincode",
"followers_url": "https://api.github.com/users/taemincode/followers",
"following_url": "https://api.github.com/users/taemincode/following{/other_user}",
"gists_url": "https://api.github.com/users/taemincode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taemincode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taemincode/subscriptions",
"organizations_url": "https://api.github.com/users/taemincode/orgs",
"repos_url": "https://api.github.com/users/taemincode/repos",
"events_url": "https://api.github.com/users/taemincode/events{/privacy}",
"received_events_url": "https://api.github.com/users/taemincode/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-31T07:08:29 | 2025-08-11T07:09:46 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39807",
"html_url": "https://github.com/huggingface/transformers/pull/39807",
"diff_url": "https://github.com/huggingface/transformers/pull/39807.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39807.patch",
"merged_at": null
} | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Translated the `bamba.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. Only after all of the checks above are complete, reveal the comment below asking KREW team members for a review! -->
May you please review this PR?
<!-- @jungnerd, @luckyvickyricky, @chelsseeey, @skwh54, @amo33, @maximizemaxwell, @D15M4S -->
<!-- @harheem, @nsbg, @Youngdong2, @xhaktm00, @ssunbear, @ChoHyoungSeo, @judy-choi -->
<!-- @4N3MONE, @Kim-Ju-won, @ahnjj, @FacerAin, @ssum21, @TaskerJang, @HyunZ118 -->
<!-- @yijun-lee, @songi104, @chhaewxn, @AhnJoonSung, @jihyun-0611, @seopp, @pyapyapya -->
@yijun-lee @harheem @4N3MONE @jungnerd
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. After the KREW team members' review is finished, reveal the comment below! -->
<!-- @stevhliu May you please review this PR? --> | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39807/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39806 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39806/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39806/comments | https://api.github.com/repos/huggingface/transformers/issues/39806/events | https://github.com/huggingface/transformers/pull/39806 | 3,279,175,258 | PR_kwDOCUB6oc6heBcJ | 39,806 | Enable SIM rules | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-31T05:32:00 | 2025-08-12T22:29:54 | 2025-08-12T12:14:26 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39806",
"html_url": "https://github.com/huggingface/transformers/pull/39806",
"diff_url": "https://github.com/huggingface/transformers/pull/39806.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39806.patch",
"merged_at": "2025-08-12T12:14:26"
} | # What does this PR do?
Enable the SIM rules of ruff, except those concerning code style. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39806/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39805 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39805/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39805/comments | https://api.github.com/repos/huggingface/transformers/issues/39805/events | https://github.com/huggingface/transformers/pull/39805 | 3,279,063,176 | PR_kwDOCUB6oc6hdpsb | 39,805 | GLM-4.5V Model Support | {
"login": "zRzRzRzRzRzRzR",
"id": 93239683,
"node_id": "U_kgDOBY65gw",
"avatar_url": "https://avatars.githubusercontent.com/u/93239683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zRzRzRzRzRzRzR",
"html_url": "https://github.com/zRzRzRzRzRzRzR",
"followers_url": "https://api.github.com/users/zRzRzRzRzRzRzR/followers",
"following_url": "https://api.github.com/users/zRzRzRzRzRzRzR/following{/other_user}",
"gists_url": "https://api.github.com/users/zRzRzRzRzRzRzR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zRzRzRzRzRzRzR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zRzRzRzRzRzRzR/subscriptions",
"organizations_url": "https://api.github.com/users/zRzRzRzRzRzRzR/orgs",
"repos_url": "https://api.github.com/users/zRzRzRzRzRzRzR/repos",
"events_url": "https://api.github.com/users/zRzRzRzRzRzRzR/events{/privacy}",
"received_events_url": "https://api.github.com/users/zRzRzRzRzRzRzR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-31T04:10:29 | 2025-08-21T08:44:27 | 2025-08-08T15:39:52 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39805",
"html_url": "https://github.com/huggingface/transformers/pull/39805",
"diff_url": "https://github.com/huggingface/transformers/pull/39805.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39805.patch",
"merged_at": "2025-08-08T15:39:52"
This PR covers two changes:
1. Modifications to the default parameters of GLM-4.1V
2. Addition of the GLM-4.5V model | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39805/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39804 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39804/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39804/comments | https://api.github.com/repos/huggingface/transformers/issues/39804/events | https://github.com/huggingface/transformers/issues/39804 | 3,279,029,624 | I_kwDOCUB6oc7DcgV4 | 39,804 | Fine tuning qwen2.5 error | {
"login": "shaojun0",
"id": 104407395,
"node_id": "U_kgDOBjkhYw",
"avatar_url": "https://avatars.githubusercontent.com/u/104407395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shaojun0",
"html_url": "https://github.com/shaojun0",
"followers_url": "https://api.github.com/users/shaojun0/followers",
"following_url": "https://api.github.com/users/shaojun0/following{/other_user}",
"gists_url": "https://api.github.com/users/shaojun0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shaojun0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shaojun0/subscriptions",
"organizations_url": "https://api.github.com/users/shaojun0/orgs",
"repos_url": "https://api.github.com/users/shaojun0/repos",
"events_url": "https://api.github.com/users/shaojun0/events{/privacy}",
"received_events_url": "https://api.github.com/users/shaojun0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-31T03:48:42 | 2025-10-03T08:03:06 | 2025-10-03T08:03:06 | NONE | null | null | null | null | ### System Info
| key | value |
| -------------------- | ----------- |
| transformers version | 4.53.3 |
| PyTorch version | 2.7.1 |
| PyTorch_npu version | 2.7.1 |
| deepspeed version | 0.17.2 |
| NPU | ascend 910b |
### Who can help?
@ivarflakstad @zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
When using tensor-parallel training for Qwen2.5, the following error was raised:
``````
[rank1]: Traceback (most recent call last):
[rank1]: File "/data/pyproject/icig-ai/cv/train.py", line 159, in <module>
[rank1]: run_only_decoder_deepspeed()
[rank1]: File "/data/pyproject/icig-ai/cv/train.py", line 126, in run_only_decoder_deepspeed
[rank1]: trainer.train()
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2206, in train
[rank1]: return inner_training_loop(
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop
[rank1]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 3749, in training_step
[rank1]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 3836, in compute_loss
[rank1]: outputs = model(**inputs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
[rank1]: ret_val = func(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2105, in forward
[rank1]: loss = self.module(*inputs, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1857, in _call_impl
[rank1]: return inner()
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1805, in inner
[rank1]: result = forward_call(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/utils/generic.py", line 943, in wrapper
[rank1]: output = func(self, *args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1509, in forward
[rank1]: outputs = self.model(
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1250, in forward
[rank1]: image_embeds = self.get_image_features(pixel_values, image_grid_thw)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1200, in get_image_features
[rank1]: image_embeds = self.visual(pixel_values, grid_thw=image_grid_thw)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 492, in forward
[rank1]: hidden_states = blk(
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/modeling_layers.py", line 83, in __call__
[rank1]: return super().__call__(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 286, in forward
[rank1]: hidden_states = hidden_states + self.attn(
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 237, in forward
[rank1]: query_states, key_states = apply_rotary_pos_emb_vision(query_states, key_states, cos, sin)
[rank1]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 153, in apply_rotary_pos_emb_vision
[rank1]: q_embed = (q * cos) + (rotate_half(q) * sin)
[rank1]: RuntimeError: The size of tensor a (40) must match the size of tensor b (80) at non-singleton dimension 2
[rank2]: Traceback (most recent call last):
[rank2]: File "/data/pyproject/icig-ai/cv/train.py", line 159, in <module>
[rank2]: run_only_decoder_deepspeed()
[rank2]: File "/data/pyproject/icig-ai/cv/train.py", line 126, in run_only_decoder_deepspeed
[rank2]: trainer.train()
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2206, in train
[rank2]: return inner_training_loop(
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop
[rank2]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 3749, in training_step
[rank2]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 3836, in compute_loss
[rank2]: outputs = model(**inputs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank2]: return self._call_impl(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank2]: return forward_call(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
[rank2]: ret_val = func(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2105, in forward
[rank2]: loss = self.module(*inputs, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank2]: return self._call_impl(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1857, in _call_impl
[rank2]: return inner()
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1805, in inner
[rank2]: result = forward_call(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/utils/generic.py", line 943, in wrapper
[rank2]: output = func(self, *args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1509, in forward
[rank2]: outputs = self.model(
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank2]: return self._call_impl(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank2]: return forward_call(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1250, in forward
[rank2]: image_embeds = self.get_image_features(pixel_values, image_grid_thw)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1200, in get_image_features
[rank2]: image_embeds = self.visual(pixel_values, grid_thw=image_grid_thw)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank2]: return self._call_impl(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank2]: return forward_call(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 492, in forward
[rank2]: hidden_states = blk(
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/modeling_layers.py", line 83, in __call__
[rank2]: return super().__call__(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank2]: return self._call_impl(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank2]: return forward_call(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 286, in forward
[rank2]: hidden_states = hidden_states + self.attn(
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank2]: return self._call_impl(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank2]: return forward_call(*args, **kwargs)
[rank2]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 223, in forward
[rank2]: self.qkv(hidden_states).reshape(seq_length, 3, self.num_heads, -1).permute(1, 0, 2, 3).unbind(0)
[rank2]: RuntimeError: shape '[32, 3, 0, -1]' is invalid for input of size 30720
[2025-07-25 12:28:12,527] [WARNING] [lr_schedules.py:686:get_lr] Attempting to get learning rate from scheduler before it has started
[rank0]: Traceback (most recent call last):
[rank0]: (identical to the [rank1] traceback above)
[rank0]: RuntimeError: The size of tensor a (40) must match the size of tensor b (80) at non-singleton dimension 2
[rank3]: Traceback (most recent call last):
[rank3]: (identical to the [rank2] traceback above)
[rank3]: RuntimeError: shape '[32, 3, 0, -1]' is invalid for input of size 30720
[ERROR] 2025-07-25-12:28:14 (PID:1072793, Device:2, RankID:-1) ERR99999 UNKNOWN applicaiton exception
[ERROR] 2025-07-25-12:28:15 (PID:1072792, Device:1, RankID:-1) ERR99999 UNKNOWN applicaiton exception
[ERROR] 2025-07-25-12:28:15 (PID:1072794, Device:3, RankID:-1) ERR99999 UNKNOWN applicaiton exception
[2025-07-25 12:28:16,773] [WARNING] [lr_schedules.py:686:get_lr] Attempting to get learning rate from scheduler before it has started
[rank4]: Traceback (most recent call last):
[rank4]: (frames identical to the [rank2] traceback above)
[rank4]: RuntimeError: shape '[48, 3, 0, -1]' is invalid for input of size 46080
[rank5]: Traceback (most recent call last):
[rank5]: (identical to the [rank4] traceback above, ending in the same RuntimeError: shape '[48, 3, 0, -1]' is invalid for input of size 46080)
[rank7]: Traceback (most recent call last):
[rank7]: (identical to the [rank4] traceback above, ending in the same RuntimeError: shape '[48, 3, 0, -1]' is invalid for input of size 46080)
[rank6]: Traceback (most recent call last):
[rank6]: File "/data/pyproject/icig-ai/cv/train.py", line 159, in <module>
[rank6]: run_only_decoder_deepspeed()
[rank6]: File "/data/pyproject/icig-ai/cv/train.py", line 126, in run_only_decoder_deepspeed
[rank6]: trainer.train()
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2206, in train
[rank6]: return inner_training_loop(
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop
[rank6]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 3749, in training_step
[rank6]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 3836, in compute_loss
[rank6]: outputs = model(**inputs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank6]: return self._call_impl(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank6]: return forward_call(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
[rank6]: ret_val = func(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2105, in forward
[rank6]: loss = self.module(*inputs, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank6]: return self._call_impl(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1857, in _call_impl
[rank6]: return inner()
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1805, in inner
[rank6]: result = forward_call(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/utils/generic.py", line 943, in wrapper
[rank6]: output = func(self, *args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1509, in forward
[rank6]: outputs = self.model(
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank6]: return self._call_impl(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank6]: return forward_call(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1250, in forward
[rank6]: image_embeds = self.get_image_features(pixel_values, image_grid_thw)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1200, in get_image_features
[rank6]: image_embeds = self.visual(pixel_values, grid_thw=image_grid_thw)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank6]: return self._call_impl(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank6]: return forward_call(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 492, in forward
[rank6]: hidden_states = blk(
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/modeling_layers.py", line 83, in __call__
[rank6]: return super().__call__(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank6]: return self._call_impl(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank6]: return forward_call(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 286, in forward
[rank6]: hidden_states = hidden_states + self.attn(
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank6]: return self._call_impl(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank6]: return forward_call(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 223, in forward
[rank6]: self.qkv(hidden_states).reshape(seq_length, 3, self.num_heads, -1).permute(1, 0, 2, 3).unbind(0)
[rank6]: RuntimeError: shape '[48, 3, 0, -1]' is invalid for input of size 46080
[ERROR] 2025-07-25-12:28:21 (PID:1072795, Device:4, RankID:-1) ERR99999 UNKNOWN applicaiton exception
``````
However, if tensor parallelism is not activated, training runs without errors:
``````
0%| | 0/26583 [00:00<?, ?it/s]`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
[rank0]:[W725 12:58:07.811811380 reducer.cpp:1430] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[rank1]:[W725 12:58:07.826088880 reducer.cpp:1430] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
[rank3]:[W725 12:58:07.841496280 reducer.cpp:1430] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
[rank2]:[W725 12:58:07.858896210 reducer.cpp:1430] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[rank5]:[W725 12:58:07.868805050 reducer.cpp:1430] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[rank4]:[W725 12:58:07.873117080 reducer.cpp:1430] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
[rank7]:[W725 12:58:07.897366570 reducer.cpp:1430] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
[rank6]:[W725 12:58:07.959260420 reducer.cpp:1430] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[rank0]:[W725 12:58:11.085906560 compiler_depend.ts:149] Warning: Failed to find function aclsysGetCANNVersion (function operator())
[rank0]:[W725 12:58:11.086719430 compiler_depend.ts:52] Warning: Version: is invalid. (function operator())
0%| | 6/26583 [00:20<22:47:43, 3.09s/it]
``````
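For reference, the reshape in `modeling_qwen2_5_vl.py` only succeeds when the qkv projection output factors exactly as `seq_length * 3 * num_heads * head_dim`; the error above suggests the sharded vision attention module ended up with `num_heads == 0`. A minimal sketch of that invariant in plain Python (illustrative names, not the actual transformers code; the 16-head figure for the 3B vision tower is an assumption):

```python
def qkv_head_dim(numel: int, seq_length: int, num_heads: int):
    """Return the head_dim implied by reshaping a qkv tensor of `numel`
    elements to (seq_length, 3, num_heads, -1), or None if the reshape
    is impossible (as in the RuntimeError above, where num_heads == 0)."""
    factor = seq_length * 3 * num_heads
    if factor == 0 or numel % factor != 0:
        return None
    return numel // factor

# With the values from the traceback: 46080 elements, seq_length 48.
qkv_head_dim(46080, 48, 0)   # None: shape [48, 3, 0, -1] cannot hold 46080 elements
qkv_head_dim(46080, 48, 4)   # 80: a 4-head shard (16 heads / autotp_size 4) would fit
```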
### code
#### train
``````python
def run_only_decoder_deepspeed():
# torch.npu.config.allow_internal_format=False
# torch.npu.set_compile_mode(jit_compile=False)
train_dataset = load_dataset("Obscure-Entropy/ImageCaptioning_EN-HU",split="train[:1%]")
eval_dataset = load_dataset("Obscure-Entropy/ImageCaptioning_EN-HU",split="train[1%:2%]")
model_path = "models/Qwen2.5-VL-3B-Instruct"
output_dir = "outputs"
training_args = TrainingArguments(output_dir=output_dir,
per_device_train_batch_size=4,
per_device_eval_batch_size=32,
num_train_epochs=3,
save_safetensors=True,
deepspeed="DeepSpeedExamples/training/tensor_parallel/configs/ds_config.json")
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_path)
processor = Qwen2_5_VLProcessor.from_pretrained(model_path)
train_dataset = ImageOnlyDecoderCaptioningDataset(train_dataset, processor)
val_dataset = ImageOnlyDecoderCaptioningDataset(eval_dataset, processor)
trainer = Trainer(model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
data_collator=qwen_2_5_collator(processor.tokenizer))
trainer.train()
trainer.save_model(output_dir)
trainer.evaluate()
``````
#### dataset
``````python
class ImageOnlyDecoderCaptioningDataset(Dataset):
def __init__(self,dataset,processor):
self.dataset = dataset
self.processor :Qwen2_5_VLProcessor = processor
self.messages = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What is shown in this image?"},
],
},
]
self.output_kwargs = Qwen2_5_VLProcessorKwargs(size={"shortest_edge": 28 * 28, "longest_edge": 28 * 28 * 4})
def __len__(self):
return len(self.dataset)
def __getitem__(self, idx):
data = self.dataset[idx]
self.messages.append({"role":"assistant","content":data["en_cap"]})
text = self.processor.apply_chat_template(self.messages, tokenize=False, add_generation_prompt=False)
inputs = self.processor(text=[text], images=[data["img"]],return_tensors="pt",**self.output_kwargs)
labels_tokenize = self.processor.tokenizer(data["en_cap"])["input_ids"]
label_padding_len = len(inputs["input_ids"].tolist()[0])-len(labels_tokenize)
inputs["labels"] = torch.tensor([[-100]*label_padding_len+labels_tokenize])
return inputs
``````
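A side note on the dataset above: `__getitem__` appends a new assistant turn to the shared `self.messages` list on every call, so the conversation grows by one message per sample fetched. A sketch that builds a fresh two-turn conversation per sample instead (a hypothetical helper for illustration, not part of the original script):

```python
def build_messages(caption: str) -> list:
    # Construct the conversation fresh for each sample, so repeated
    # __getitem__ calls do not accumulate assistant turns in shared state.
    return [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "What is shown in this image?"},
            ],
        },
        {"role": "assistant", "content": caption},
    ]
```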
#### data_collator
``````python
def qwen_2_5_collator(tokenizer: transformers.PreTrainedTokenizer):
def collate_fn(instances: Sequence[Dict]) -> Dict[str, torch.Tensor]:
# input_ids, labels, position_ids = tuple(
# [instance[key] for instance in instances]
# for key in ("input_ids", "labels", "position_ids")
# )
# input_ids = [ids.squeeze(0) for ids in input_ids]
input_ids, labels = tuple(
[instance[key] for instance in instances]
for key in ("input_ids", "labels")
)
input_ids = [ids.squeeze(0) for ids in input_ids]
labels = [ids.squeeze(0) for ids in labels]
input_ids = torch.nn.utils.rnn.pad_sequence(
input_ids, batch_first=True, padding_value=tokenizer.pad_token_id
)
labels = torch.nn.utils.rnn.pad_sequence(
labels, batch_first=True, padding_value=IGNORE_INDEX
)
# position_ids = pad_and_cat(position_ids)
input_ids = input_ids[:, : tokenizer.model_max_length]
labels = labels[:, : tokenizer.model_max_length]
# position_ids = position_ids[:, : tokenizer.model_max_length]
batch = dict(
input_ids=input_ids,
labels=labels,
attention_mask=input_ids.ne(tokenizer.pad_token_id),
)
images = list(
instance["pixel_values"]
for instance in instances
if "pixel_values" in instance
)
videos = list(
instance["pixel_values_videos"]
for instance in instances
if "pixel_values_videos" in instance
)
if len(images) != 0:
concat_images = torch.cat([image for image in images], dim=0)
grid_thw = [
instance["image_grid_thw"]
for instance in instances
if "image_grid_thw" in instance
]
grid_thw = torch.cat(grid_thw, dim=0)
else:
concat_images = None
grid_thw = None
if len(videos) != 0:
concat_videos = torch.cat([video for video in videos], dim=0)
video_grid_thw = [
instance["video_grid_thw"]
for instance in instances
if "video_grid_thw" in instance
]
video_grid_thw = torch.cat(video_grid_thw, dim=0)
else:
concat_videos = None
video_grid_thw = None
batch["pixel_values"] = concat_images
batch["image_grid_thw"] = grid_thw
batch["pixel_values_videos"] = concat_videos
batch["video_grid_thw"] = video_grid_thw
# batch["position_ids"] = position_ids
# for item in batch.keys():
# if batch[item] is not None:
# print(item, batch[item].shape)
return batch
return collate_fn
``````
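The collator's padding step can be summarized without torch: `pad_sequence(batch_first=True)` right-pads every sequence to the batch maximum. A pure-Python sketch of that behavior (hypothetical helper, for illustration only):

```python
def pad_batch(seqs, pad_value):
    # Right-pad each sequence to the length of the longest one, mirroring
    # torch.nn.utils.rnn.pad_sequence(..., batch_first=True, padding_value=pad_value).
    max_len = max(len(s) for s in seqs)
    return [list(s) + [pad_value] * (max_len - len(s)) for s in seqs]

pad_batch([[5, 6, 7], [8]], 0)  # [[5, 6, 7], [8, 0, 0]]
```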
#### ds_config
``````json
{
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"total_num_steps": "auto",
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 1,
"gather_16bit_weights_on_model_save": true
},
"tensor_parallel":{
"autotp_size": 4
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 1,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
``````
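With `autotp_size: 4` on 8 ranks, DeepSpeed AutoTP splits the world into tensor-parallel groups of 4 and uses the remaining dimension for data parallelism, so this run would have 2 data-parallel replicas. A sketch of that arithmetic (an assumption about the rank layout, not DeepSpeed's actual code):

```python
def parallel_layout(world_size: int, autotp_size: int) -> dict:
    # Ranks are grouped into tensor-parallel groups of `autotp_size`;
    # the remaining dimension is data parallelism.
    assert world_size % autotp_size == 0, "world size must be divisible by autotp_size"
    return {"tp_groups": world_size // autotp_size, "tp_size": autotp_size}

parallel_layout(8, 4)  # {'tp_groups': 2, 'tp_size': 4}
```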
### Expected behavior
Smooth training | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39804/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39803 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39803/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39803/comments | https://api.github.com/repos/huggingface/transformers/issues/39803/events | https://github.com/huggingface/transformers/issues/39803 | 3,279,019,857 | I_kwDOCUB6oc7Dcd9R | 39,803 | Memory leak occurred during training qwen-2.5-vl | {
"login": "shaojun0",
"id": 104407395,
"node_id": "U_kgDOBjkhYw",
"avatar_url": "https://avatars.githubusercontent.com/u/104407395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shaojun0",
"html_url": "https://github.com/shaojun0",
"followers_url": "https://api.github.com/users/shaojun0/followers",
"following_url": "https://api.github.com/users/shaojun0/following{/other_user}",
"gists_url": "https://api.github.com/users/shaojun0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shaojun0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shaojun0/subscriptions",
"organizations_url": "https://api.github.com/users/shaojun0/orgs",
"repos_url": "https://api.github.com/users/shaojun0/repos",
"events_url": "https://api.github.com/users/shaojun0/events{/privacy}",
"received_events_url": "https://api.github.com/users/shaojun0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-31T03:40:47 | 2025-09-16T08:02:43 | 2025-09-16T08:02:43 | NONE | null | null | null | null | ### System Info
| key | value |
| -------------------- | ------------- |
| transformers version | 4.53.3 |
| PyTorch version | 2.7.1 |
| PyTorch_npu version | 2.7.1 |
| deepspeed version | 0.17.2 |
| NPU | ascend 910b*8 |
### Who can help?
@zucchini-nlp @ivarflakstad
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I tried to fine-tune Qwen2.5-VL on NPUs, but as the number of steps increases, each training step takes longer and longer (growing roughly linearly), eventually leading to an out-of-memory error.
``````
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
0%| | 0/3804 [00:00<?, ?it/s]`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
0%|▎ | 6/3804 [00:29<5:04:51, 4.82s/it][rank0]:[W728 03:15:39.212355280 compiler_depend.ts:159] Warning: Warning: Device do not support double dtype now, dtype cast repalce with float. (function operator())
[rank0]:[W728 03:15:39.212766810 compiler_depend.ts:149] Warning: Failed to find function aclsysGetCANNVersion (function operator())
[rank0]:[W728 03:15:39.213567250 compiler_depend.ts:52] Warning: Version: is invalid. (function operator())
4%|███████▋ | 152/3804 [1:10:48<48:59:05, 48.29s/it][rank6]: Traceback (most recent call last):
[rank6]: File "/data/pyproject/icig-ai/cv/train.py", line 163, in <module>
[rank6]: run_only_decoder_deepspeed()
[rank6]: File "/data/pyproject/icig-ai/cv/train.py", line 130, in run_only_decoder_deepspeed
[rank6]: trainer.train()
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2206, in train
[rank6]: return inner_training_loop(
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop
[rank6]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/transformers/trainer.py", line 3797, in training_step
[rank6]: self.accelerator.backward(loss, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/accelerate/accelerator.py", line 2545, in backward
[rank6]: self.deepspeed_engine_wrapped.backward(loss, sync_gradients=self.sync_gradients, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 270, in backward
[rank6]: self.engine.backward(loss, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
[rank6]: ret_val = func(*args, **kwargs)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2267, in backward
[rank6]: self._do_optimizer_backward(loss, retain_graph)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2213, in _do_optimizer_backward
[rank6]: self.optimizer.backward(loss, retain_graph=retain_graph)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2184, in backward
[rank6]: self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 65, in backward
[rank6]: scaled_loss.backward(retain_graph=retain_graph)
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
[rank6]: torch.autograd.backward(
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 353, in backward
[rank6]: _engine_run_backward(
[rank6]: File "/usr/local/python3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 824, in _engine_run_backward
[rank6]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank6]: RuntimeError: NPU out of memory. Tried to allocate 11.28 GiB (NPU 6; 60.96 GiB total capacity; 38.77 GiB already allocated; 38.77 GiB current active; 7.25 GiB free; 51.98 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
[ERROR] 2025-07-28-04:26:30 (PID:1201180, Device:6, RankID:-1) ERR99999 UNKNOWN applicaiton exception
``````
### code
#### train
```python
def run_only_decoder_deepspeed():
# torch.npu.config.allow_internal_format=False
# torch.npu.set_compile_mode(jit_compile=False)
train_dataset = load_dataset("Obscure-Entropy/ImageCaptioning_EN-HU",split="train[:90%]")
eval_dataset = load_dataset("Obscure-Entropy/ImageCaptioning_EN-HU",split="train[90%:]")
model_path = "models/Qwen2.5-VL-3B-Instruct"
output_dir = "outputs"
deep_speed_path = "DeepSpeedExamples/training/autotuning/hf/dsconfigs/ds_config_z2.json"
training_args = TrainingArguments(output_dir=output_dir,
per_device_train_batch_size=1,
per_device_eval_batch_size=16,
num_train_epochs=3,
save_safetensors=True,
deepspeed=deep_speed_path,
fp16=True,
gradient_accumulation_steps=4,
gradient_checkpointing=True)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_path,torch_dtype=torch.float16)
processor = Qwen2_5_VLProcessor.from_pretrained(model_path)
train_dataset = ImageOnlyDecoderCaptioningDataset(train_dataset, processor)
val_dataset = ImageOnlyDecoderCaptioningDataset(eval_dataset, processor)
trainer = Trainer(model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
data_collator=qwen_2_5_collator(processor.tokenizer))
trainer.train()
trainer.save_model(output_dir)
trainer.evaluate()
```
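For the run above, the global batch size follows directly from the arguments: per-device batch 1 × gradient accumulation 4 × 8 NPUs = 32 samples per optimizer step. A quick sketch of that arithmetic (plain Python, not Trainer internals):

```python
def global_batch_size(per_device: int, grad_accum: int, world_size: int) -> int:
    # Samples consumed per optimizer step under data parallelism
    # with gradient accumulation.
    return per_device * grad_accum * world_size

global_batch_size(1, 4, 8)  # 32
```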
#### dataset
```python
class ImageOnlyDecoderCaptioningDataset(Dataset):
def __init__(self,dataset,processor):
self.dataset = dataset
self.processor :Qwen2_5_VLProcessor = processor
self.messages = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "text", "text": "What is shown in this image?"},
],
},
]
self.output_kwargs = Qwen2_5_VLProcessorKwargs(size={"shortest_edge": 28 * 28, "longest_edge": 28 * 28 * 4})
def __len__(self):
return len(self.dataset)
def __getitem__(self, idx):
data = self.dataset[idx]
self.messages.append({"role":"assistant","content":data["en_cap"]})
text = self.processor.apply_chat_template(self.messages, tokenize=False, add_generation_prompt=False)
inputs = self.processor(text=[text], images=[data["img"]],return_tensors="pt",**self.output_kwargs)
labels_tokenize = self.processor.tokenizer(data["en_cap"])["input_ids"]
label_padding_len = len(inputs["input_ids"].tolist()[0])-len(labels_tokenize)
inputs["labels"] = torch.tensor([[-100]*label_padding_len+labels_tokenize])
return inputs
```
#### collator
```python
def qwen_2_5_collator(tokenizer: transformers.PreTrainedTokenizer):
def collate_fn(instances: Sequence[Dict]) -> Dict[str, torch.Tensor]:
# input_ids, labels, position_ids = tuple(
# [instance[key] for instance in instances]
# for key in ("input_ids", "labels", "position_ids")
# )
# input_ids = [ids.squeeze(0) for ids in input_ids]
input_ids, labels = tuple(
[instance[key] for instance in instances]
for key in ("input_ids", "labels")
)
input_ids = [ids.squeeze(0) for ids in input_ids]
labels = [ids.squeeze(0) for ids in labels]
input_ids = torch.nn.utils.rnn.pad_sequence(
input_ids, batch_first=True, padding_value=tokenizer.pad_token_id
)
labels = torch.nn.utils.rnn.pad_sequence(
labels, batch_first=True, padding_value=IGNORE_INDEX
)
# position_ids = pad_and_cat(position_ids)
input_ids = input_ids[:, : tokenizer.model_max_length]
labels = labels[:, : tokenizer.model_max_length]
# position_ids = position_ids[:, : tokenizer.model_max_length]
batch = dict(
input_ids=input_ids,
labels=labels,
attention_mask=input_ids.ne(tokenizer.pad_token_id),
)
images = list(
instance["pixel_values"]
for instance in instances
if "pixel_values" in instance
)
videos = list(
instance["pixel_values_videos"]
for instance in instances
if "pixel_values_videos" in instance
)
if len(images) != 0:
concat_images = torch.cat([image for image in images], dim=0)
grid_thw = [
instance["image_grid_thw"]
for instance in instances
if "image_grid_thw" in instance
]
grid_thw = torch.cat(grid_thw, dim=0)
else:
concat_images = None
grid_thw = None
if len(videos) != 0:
concat_videos = torch.cat([video for video in videos], dim=0)
video_grid_thw = [
instance["video_grid_thw"]
for instance in instances
if "video_grid_thw" in instance
]
video_grid_thw = torch.cat(video_grid_thw, dim=0)
else:
concat_videos = None
video_grid_thw = None
batch["pixel_values"] = concat_images
batch["image_grid_thw"] = grid_thw
batch["pixel_values_videos"] = concat_videos
batch["video_grid_thw"] = video_grid_thw
# batch["position_ids"] = position_ids
# for item in batch.keys():
# if batch[item] is not None:
# print(item, batch[item].shape)
return batch
return collate_fn
```
#### deepspeed config
```json
{
"train_micro_batch_size_per_gpu": "auto",
"zero_optimization": {
"stage": 2
}
}
```
### Expected behavior
Smooth training
---
I also tried fine-tuning on 4 × A800 GPUs, but the problem still occurred.
```
root@notebook-1947842820639805441-scnbfowvjz-74208:~/pyproject/quen2_5_train# deepspeed --num_gpus=4 train.py
[2025-08-01 15:21:13,909] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-08-01 15:21:16,982] [INFO] [logging.py:107:log_dist] [Rank -1] [TorchCheckpointEngine] Initialized with serialization = False
[2025-08-01 15:21:17,771] [WARNING] [runner.py:220:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2025-08-01 15:21:17,771] [INFO] [runner.py:610:main] cmd = /opt/conda/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgM119 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None train.py
[2025-08-01 15:21:22,209] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-08-01 15:21:25,167] [INFO] [logging.py:107:log_dist] [Rank -1] [TorchCheckpointEngine] Initialized with serialization = False
[2025-08-01 15:21:25,902] [INFO] [launch.py:139:main] 0 NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.17.1-1+cuda12.1
[2025-08-01 15:21:25,902] [INFO] [launch.py:139:main] 0 NV_LIBNCCL_DEV_PACKAGE_VERSION=2.17.1-1
[2025-08-01 15:21:25,902] [INFO] [launch.py:139:main] 0 NCCL_VERSION=2.17.1-1
[2025-08-01 15:21:25,902] [INFO] [launch.py:139:main] 0 NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev
[2025-08-01 15:21:25,902] [INFO] [launch.py:139:main] 0 NV_LIBNCCL_PACKAGE=libnccl2=2.17.1-1+cuda12.1
[2025-08-01 15:21:25,902] [INFO] [launch.py:139:main] 0 NV_LIBNCCL_PACKAGE_NAME=libnccl2
[2025-08-01 15:21:25,903] [INFO] [launch.py:139:main] 0 NV_LIBNCCL_PACKAGE_VERSION=2.17.1-1
[2025-08-01 15:21:25,903] [INFO] [launch.py:146:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]}
[2025-08-01 15:21:25,903] [INFO] [launch.py:152:main] nnodes=1, num_local_procs=4, node_rank=0
[2025-08-01 15:21:25,903] [INFO] [launch.py:163:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3]})
[2025-08-01 15:21:25,903] [INFO] [launch.py:164:main] dist_world_size=4
[2025-08-01 15:21:25,903] [INFO] [launch.py:168:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3
[2025-08-01 15:21:25,904] [INFO] [launch.py:256:main] process 8961 spawned with command: ['/opt/conda/bin/python', '-u', 'train.py', '--local_rank=0']
[2025-08-01 15:21:25,904] [INFO] [launch.py:256:main] process 8962 spawned with command: ['/opt/conda/bin/python', '-u', 'train.py', '--local_rank=1']
[2025-08-01 15:21:25,905] [INFO] [launch.py:256:main] process 8963 spawned with command: ['/opt/conda/bin/python', '-u', 'train.py', '--local_rank=2']
[2025-08-01 15:21:25,906] [INFO] [launch.py:256:main] process 8964 spawned with command: ['/opt/conda/bin/python', '-u', 'train.py', '--local_rank=3']
Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 36/36 [00:00<00:00, 147312.14it/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 36/36 [00:00<00:00, 11100.12files/s]
Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 36/36 [00:00<00:00, 157122.73it/s]
Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 36/36 [00:00<00:00, 155184.94it/s]
Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 36/36 [00:00<00:00, 163237.78it/s]
Generating train split: 3600000 examples [01:18, 45799.61 examples/s]
Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 36/36 [00:00<00:00, 32732.48it/s]
Resolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 36/36 [00:00<00:00, 27920.66it/s]
Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 36/36 [00:00<00:00, 130618.46it/s]
Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 36/36 [00:00<00:00, 142582.57it/s]
[2025-08-01 15:22:54,049] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-08-01 15:22:54,096] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-08-01 15:22:54,098] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-08-01 15:22:54,101] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-08-01 15:22:55,606] [INFO] [logging.py:107:log_dist] [Rank -1] [TorchCheckpointEngine] Initialized with serialization = False
[2025-08-01 15:22:55,623] [INFO] [comm.py:676:init_distributed] cdb=None
[2025-08-01 15:22:55,673] [INFO] [logging.py:107:log_dist] [Rank -1] [TorchCheckpointEngine] Initialized with serialization = False
[2025-08-01 15:22:55,689] [INFO] [comm.py:676:init_distributed] cdb=None
[2025-08-01 15:22:55,689] [INFO] [comm.py:707:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2025-08-01 15:22:55,805] [INFO] [logging.py:107:log_dist] [Rank -1] [TorchCheckpointEngine] Initialized with serialization = False
[2025-08-01 15:22:55,821] [INFO] [comm.py:676:init_distributed] cdb=None
[2025-08-01 15:22:55,876] [INFO] [logging.py:107:log_dist] [Rank -1] [TorchCheckpointEngine] Initialized with serialization = False
[2025-08-01 15:22:55,896] [INFO] [comm.py:676:init_distributed] cdb=None
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [01:13<00:00, 36.96s/it]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [01:14<00:00, 37.04s/it]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
You have video processor config saved in `preprocessor.json` file which is deprecated. Video processor configs should be saved in their own `video_preprocessor.json` file. You can rename the file or load and save the processor back which renames it automatically. Loading from `preprocessor.json` will be removed in v5.0.
You have video processor config saved in `preprocessor.json` file which is deprecated. Video processor configs should be saved in their own `video_preprocessor.json` file. You can rename the file or load and save the processor back which renames it automatically. Loading from `preprocessor.json` will be removed in v5.0.
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [01:14<00:00, 37.31s/it]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [01:14<00:00, 37.41s/it]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
You have video processor config saved in `preprocessor.json` file which is deprecated. Video processor configs should be saved in their own `video_preprocessor.json` file. You can rename the file or load and save the processor back which renames it automatically. Loading from `preprocessor.json` will be removed in v5.0.
You have video processor config saved in `preprocessor.json` file which is deprecated. Video processor configs should be saved in their own `video_preprocessor.json` file. You can rename the file or load and save the processor back which renames it automatically. Loading from `preprocessor.json` will be removed in v5.0.
Detected kernel version 3.10.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
0%| | 0/6750 [00:00<?, ?it/s]`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
  0%|          | 2/6750 [00:07<7:18:26,  3.90s/it]An error occurred, skipping file
  0%|          | 6/6750 [00:25<8:52:27,  4.74s/it]An error occurred, skipping file
An error occurred, skipping file
  0%|▍         | 25/6750 [02:19<15:29:11,  8.29s/it]An error occurred, skipping file
  0%|▌         | 29/6750 [03:05<19:22:27, 10.38s/it]An error occurred, skipping file
0%|▌ | 33/6750 [03:51<21:22:09, 11.45s/it]
1%|▌ | 34/6750 [04:03<22:05:59, 11.85s/it][rank2]: Traceback (most recent call last):
[rank2]: File "/root/pyproject/quen2_5_train/train.py", line 77, in <module>
[rank2]: run_only_decoder_deepspeed()
[rank2]: File "/root/pyproject/quen2_5_train/train.py", line 71, in run_only_decoder_deepspeed
[rank2]: trainer.train()
[rank2]: File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2240, in train
[rank2]: return inner_training_loop(
[rank2]: File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2555, in _inner_training_loop
[rank2]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank2]: File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 3791, in training_step
[rank2]: self.accelerator.backward(loss, **kwargs)
[rank2]: File "/opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py", line 2545, in backward
[rank2]: self.deepspeed_engine_wrapped.backward(loss, sync_gradients=self.sync_gradients, **kwargs)
[rank2]: File "/opt/conda/lib/python3.10/site-packages/accelerate/utils/deepspeed.py", line 270, in backward
[rank2]: self.engine.backward(loss, **kwargs)
[rank2]: File "/opt/conda/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 20, in wrapped_fn
[rank2]: ret_val = func(*args, **kwargs)
[rank2]: File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2267, in backward
[rank2]: self._do_optimizer_backward(loss, retain_graph)
[rank2]: File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2213, in _do_optimizer_backward
[rank2]: self.optimizer.backward(loss, retain_graph=retain_graph)
[rank2]: File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 2184, in backward
[rank2]: self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
[rank2]: File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 65, in backward
[rank2]: scaled_loss.backward(retain_graph=retain_graph)
[rank2]: File "/opt/conda/lib/python3.10/site-packages/torch/_tensor.py", line 581, in backward
[rank2]: torch.autograd.backward(
[rank2]: File "/opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py", line 347, in backward
[rank2]: _engine_run_backward(
[rank2]: File "/opt/conda/lib/python3.10/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
[rank2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank2]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 5.21 GiB. GPU 2 has a total capacity of 79.15 GiB of which 729.31 MiB is free. Process 47906 has 78.43 GiB memory in use. Of the allocated memory 72.42 GiB is allocated by PyTorch, and 5.30 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[2025-08-01 15:28:47,390] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 8961
[2025-08-01 15:28:50,061] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 8962
[2025-08-01 15:28:51,254] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 8963
[2025-08-01 15:28:51,254] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 8964
[2025-08-01 15:28:53,749] [ERROR] [launch.py:325:sigkill_handler] ['/opt/conda/bin/python', '-u', 'train.py', '--local_rank=3'] exits with return code = 1
```
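The out-of-memory error above already hints at one mitigation. A minimal sketch (an assumption on my part, not a verified fix for this run — it only reduces fragmentation from "reserved but unallocated" memory, not total memory pressure, and must run before the first CUDA allocation, e.g. at the top of `train.py`):

```python
import os

# Enable expandable segments, as suggested by the PyTorch OOM message, to
# reduce fragmentation between allocated and reserved-but-unallocated memory.
# This must be set before any CUDA tensor is created.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"
```

If fragmentation is not the culprit, the usual levers are a smaller per-device batch size with gradient accumulation, or a higher ZeRO stage with offloading.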
| {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39803/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39802 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39802/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39802/comments | https://api.github.com/repos/huggingface/transformers/issues/39802/events | https://github.com/huggingface/transformers/pull/39802 | 3,278,674,325 | PR_kwDOCUB6oc6hcb2T | 39,802 | fix: qwen 25vl rope if item is masked | {
"login": "jeffrey-dot-li",
"id": 46302202,
"node_id": "MDQ6VXNlcjQ2MzAyMjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/46302202?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffrey-dot-li",
"html_url": "https://github.com/jeffrey-dot-li",
"followers_url": "https://api.github.com/users/jeffrey-dot-li/followers",
"following_url": "https://api.github.com/users/jeffrey-dot-li/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffrey-dot-li/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffrey-dot-li/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffrey-dot-li/subscriptions",
"organizations_url": "https://api.github.com/users/jeffrey-dot-li/orgs",
"repos_url": "https://api.github.com/users/jeffrey-dot-li/repos",
"events_url": "https://api.github.com/users/jeffrey-dot-li/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffrey-dot-li/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T23:15:30 | 2025-08-22T01:59:00 | 2025-08-22T01:59:00 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39802",
"html_url": "https://github.com/huggingface/transformers/pull/39802",
"diff_url": "https://github.com/huggingface/transformers/pull/39802.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39802.patch",
"merged_at": null
} | # What does this PR do?
In the current Qwen 2.5 VL implementation, if you mask out an entire item in a batch, the RoPE calculation fails with:
```
llm_positions = torch.cat(llm_pos_ids_list, dim=1).reshape(3, -1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: torch.cat(): expected a non-empty list of Tensors
```
This PR fixes the error by returning early for samples that are fully masked.
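A minimal sketch of that guard (the helper name and fallback are illustrative, not the exact PR diff): when a fully masked sample contributes no position ids, skip the concatenation and fall back to sequential positions for the three RoPE axes.

```python
import torch

def concat_llm_positions(llm_pos_ids_list, seq_len):
    # Hypothetical guard: a fully masked sample leaves llm_pos_ids_list empty,
    # and torch.cat on an empty list raises "expected a non-empty list of
    # Tensors". Return default sequential positions for the 3 RoPE axes instead.
    if len(llm_pos_ids_list) == 0:
        return torch.arange(seq_len).view(1, -1).expand(3, -1)
    return torch.cat(llm_pos_ids_list, dim=1).reshape(3, -1)

# A fully masked sample no longer crashes:
print(concat_llm_positions([], 4).shape)  # torch.Size([3, 4])
```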
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "jeffrey-dot-li",
"id": 46302202,
"node_id": "MDQ6VXNlcjQ2MzAyMjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/46302202?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffrey-dot-li",
"html_url": "https://github.com/jeffrey-dot-li",
"followers_url": "https://api.github.com/users/jeffrey-dot-li/followers",
"following_url": "https://api.github.com/users/jeffrey-dot-li/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffrey-dot-li/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffrey-dot-li/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffrey-dot-li/subscriptions",
"organizations_url": "https://api.github.com/users/jeffrey-dot-li/orgs",
"repos_url": "https://api.github.com/users/jeffrey-dot-li/repos",
"events_url": "https://api.github.com/users/jeffrey-dot-li/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffrey-dot-li/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39802/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39801 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39801/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39801/comments | https://api.github.com/repos/huggingface/transformers/issues/39801/events | https://github.com/huggingface/transformers/issues/39801 | 3,278,426,879 | I_kwDOCUB6oc7DaNL_ | 39,801 | ValueError: This model does not support cache_implementation='static'. Please check the following issue: https://github.com/huggingface/transformers/issues/28981 | {
"login": "jpitalopez",
"id": 150071322,
"node_id": "U_kgDOCPHoGg",
"avatar_url": "https://avatars.githubusercontent.com/u/150071322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jpitalopez",
"html_url": "https://github.com/jpitalopez",
"followers_url": "https://api.github.com/users/jpitalopez/followers",
"following_url": "https://api.github.com/users/jpitalopez/following{/other_user}",
"gists_url": "https://api.github.com/users/jpitalopez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jpitalopez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jpitalopez/subscriptions",
"organizations_url": "https://api.github.com/users/jpitalopez/orgs",
"repos_url": "https://api.github.com/users/jpitalopez/repos",
"events_url": "https://api.github.com/users/jpitalopez/events{/privacy}",
"received_events_url": "https://api.github.com/users/jpitalopez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-30T20:59:45 | 2025-09-07T08:02:42 | 2025-09-07T08:02:42 | NONE | null | null | null | null | ### System Info
_prepare_cache_for_generation
raise ValueError(
ValueError: This model does not support cache_implementation='static'. Please check the following issue: https://github.com/huggingface/transformers/issues/28981
I got this error and I have no clue how to solve it. I tried several implementations from different people and always hit the same problem.
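One possible workaround (an assumption on my part, not verified against unsloth internals): if the loader pinned `cache_implementation="static"` on the model's generation config, clearing it before calling `generate()` lets generation fall back to the default dynamic cache. Sketched below with a stand-in object; on a real model you would operate on `model.generation_config` directly.

```python
from types import SimpleNamespace

def clear_static_cache(model):
    # Hypothetical helper: drop a pinned "static" cache implementation so
    # generate() can fall back to the default dynamic cache.
    gen_cfg = getattr(model, "generation_config", None)
    if gen_cfg is not None and getattr(gen_cfg, "cache_implementation", None) == "static":
        gen_cfg.cache_implementation = None
    return model

# Stand-in for a loaded model whose generation config pins the static cache:
model = SimpleNamespace(generation_config=SimpleNamespace(cache_implementation="static"))
clear_static_cache(model)
print(model.generation_config.cache_implementation)  # None
```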
I used this code: https://mer.vin/2024/11/finetune-llama-3-2-vision-radiology-images/
import os
from unsloth import FastVisionModel
import torch
from datasets import load_dataset
from transformers import TextStreamer
from unsloth import is_bf16_supported
from unsloth.trainer import UnslothVisionDataCollator
from trl import SFTTrainer, SFTConfig
# 1. Load the model
model, tokenizer = FastVisionModel.from_pretrained(
"unsloth/Llama-3.2-11B-Vision-Instruct",
load_in_4bit = True,
use_gradient_checkpointing = "unsloth",
)
model = FastVisionModel.get_peft_model(
model,
finetune_vision_layers = True,
finetune_language_layers = True,
finetune_attention_modules = True,
finetune_mlp_modules = True,
r = 16,
lora_alpha = 16,
lora_dropout = 0,
bias = "none",
random_state = 3407,
use_rslora = False,
loftq_config = None,
)
# 2. Load the dataset
dataset = load_dataset("unsloth/Radiology_mini", split = "train")
instruction = "You are an expert radiographer. Describe accurately what you see in this image."
def convert_to_conversation(sample):
conversation = [
{ "role": "user",
"content" : [
{"type" : "text", "text" : instruction},
{"type" : "image", "image" : sample["image"]} ]
},
{ "role" : "assistant",
"content" : [
{"type" : "text", "text" : sample["caption"]} ]
},
]
return { "messages" : conversation }
pass
converted_dataset = [convert_to_conversation(sample) for sample in dataset]
# 3. Before training
FastVisionModel.for_inference(model)
image = dataset[0]["image"]
instruction = "You are an expert radiographer. Describe accurately what you see in this image."
messages = [
{"role": "user", "content": [
{"type": "image"},
{"type": "text", "text": instruction}
]}
]
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt = True)
inputs = tokenizer(
image,
input_text,
add_special_tokens = False,
return_tensors = "pt",
).to("cuda")
print("\nBefore training:\n")
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128,
use_cache = True, temperature = 1.5, min_p = 0.1)
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
pip install unsloth
export HF_TOKEN=xxxxxxxxxxxxx
### Expected behavior
Start fine-tuning | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39801/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39800 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39800/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39800/comments | https://api.github.com/repos/huggingface/transformers/issues/39800/events | https://github.com/huggingface/transformers/pull/39800 | 3,278,400,051 | PR_kwDOCUB6oc6hbif3 | 39,800 | Add EdgeTAM | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6886428489,
"node_id": "LA_kwDOCUB6oc8AAAABmnaPSQ",
"url": "https://api.github.com/repos/huggingface/transformers/labels/run-slow",
"name": "run-slow",
"color": "E1D519",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-30T20:45:40 | 2025-09-29T15:54:55 | 2025-09-29T15:54:54 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39800",
"html_url": "https://github.com/huggingface/transformers/pull/39800",
"diff_url": "https://github.com/huggingface/transformers/pull/39800.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39800.patch",
"merged_at": "2025-09-29T15:54:54"
} | # What does this PR do?
Add [EdgeTAM](https://github.com/facebookresearch/EdgeTAM)
Largely based on SAM2, implemented via modular Transformers. | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39800/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39799 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39799/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39799/comments | https://api.github.com/repos/huggingface/transformers/issues/39799/events | https://github.com/huggingface/transformers/pull/39799 | 3,278,108,033 | PR_kwDOCUB6oc6hajPI | 39,799 | Mistral: Add support for interleaved attention | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T18:40:28 | 2025-08-12T12:31:14 | 2025-08-12T12:31:14 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39799",
"html_url": "https://github.com/huggingface/transformers/pull/39799",
"diff_url": "https://github.com/huggingface/transformers/pull/39799.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39799.patch",
"merged_at": null
} | Adds support for interleaved attention masks to the Mistral model. | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39799/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39798 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39798/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39798/comments | https://api.github.com/repos/huggingface/transformers/issues/39798/events | https://github.com/huggingface/transformers/issues/39798 | 3,278,000,102 | I_kwDOCUB6oc7DYk_m | 39,798 | You current version of `autoawq` does not support module quantization skipping, please upgrade `autoawq` package to at least 0.1.8. | {
"login": "bi1101",
"id": 15710921,
"node_id": "MDQ6VXNlcjE1NzEwOTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/15710921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bi1101",
"html_url": "https://github.com/bi1101",
"followers_url": "https://api.github.com/users/bi1101/followers",
"following_url": "https://api.github.com/users/bi1101/following{/other_user}",
"gists_url": "https://api.github.com/users/bi1101/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bi1101/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bi1101/subscriptions",
"organizations_url": "https://api.github.com/users/bi1101/orgs",
"repos_url": "https://api.github.com/users/bi1101/repos",
"events_url": "https://api.github.com/users/bi1101/events{/privacy}",
"received_events_url": "https://api.github.com/users/bi1101/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-30T18:00:10 | 2025-08-26T13:31:00 | 2025-08-26T13:31:00 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.54.1
- Platform: Windows-10-10.0.26100-SP0
- Python version: 3.10.11
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cpu (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
### Who can help?
@SunMarc @MekkCyber
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoModelForCausalLM

model_id = "QuixiAI/Qwen3-30B-A3B-AWQ"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="flash_attention_2",
)
```
### Expected behavior
The model loads. | {
"login": "bi1101",
"id": 15710921,
"node_id": "MDQ6VXNlcjE1NzEwOTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/15710921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bi1101",
"html_url": "https://github.com/bi1101",
"followers_url": "https://api.github.com/users/bi1101/followers",
"following_url": "https://api.github.com/users/bi1101/following{/other_user}",
"gists_url": "https://api.github.com/users/bi1101/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bi1101/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bi1101/subscriptions",
"organizations_url": "https://api.github.com/users/bi1101/orgs",
"repos_url": "https://api.github.com/users/bi1101/repos",
"events_url": "https://api.github.com/users/bi1101/events{/privacy}",
"received_events_url": "https://api.github.com/users/bi1101/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39798/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39797 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39797/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39797/comments | https://api.github.com/repos/huggingface/transformers/issues/39797/events | https://github.com/huggingface/transformers/pull/39797 | 3,277,761,456 | PR_kwDOCUB6oc6hZYay | 39,797 | [core] Refactor the Cache logic to make it simpler and more general | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T16:38:03 | 2025-08-08T12:47:24 | 2025-08-08T12:47:22 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39797",
"html_url": "https://github.com/huggingface/transformers/pull/39797",
"diff_url": "https://github.com/huggingface/transformers/pull/39797.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39797.patch",
"merged_at": "2025-08-08T12:47:22"
} | # What does this PR do?
Big simplifications everywhere, but most notably:
- all caches are initialized lazily -> no more issues of devices with device_map, which would lead to breaking the Static dynamo addresses due to device movement + no issue of dimensions with TP + much simpler to prepare for `generate` (all properties are derived at first `update` time) -> simpler and more efficient (no device copies)
- `early_initialization` provides a way to init everything before `update` is called -> this is needed for `export` as we can't trace correctly if initialization is lazy
- removed CacheProcessor -> QuantizedProcessor should be QuantizedLayers instead, and offloading alone does not justify the Processor boilerplate -> much easier to have offloading as part of the Layer and Cache themselves (it's also much more robust now regarding devices)
- Hybrid and HybridChunked now check for `chunk_attention_size` correctly again (it was lost before which would break Llama4)
- code much easier to follow and understand -> more maintainable
- this is also a big step towards completely removing the `cache_position`, which would simplify the library a lot, and will come in a follow-up PR | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39797/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39797/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39796 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39796/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39796/comments | https://api.github.com/repos/huggingface/transformers/issues/39796/events | https://github.com/huggingface/transformers/pull/39796 | 3,277,686,541 | PR_kwDOCUB6oc6hZH06 | 39,796 | [pipelines] text-to-audio pipeline standardization | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-07-30T16:16:03 | 2025-07-31T17:37:52 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39796",
"html_url": "https://github.com/huggingface/transformers/pull/39796",
"diff_url": "https://github.com/huggingface/transformers/pull/39796.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39796.patch",
"merged_at": null
} | # What does this PR do?
⚠️ TODO before merging: after settling on the design, update pipeline usage in model docs accordingly.
This PR standardizes `text-to-audio` such that the following lines work with most* text-to-audio models:
```py
from transformers import pipeline
synthesiser = pipeline("text-to-audio", "facebook/musicgen-large")
music = synthesiser("A low-fi song with a strong bassline")
synthesiser.save_audio(music, "test.wav")
```
On most* models with voice control, the voice can be selected through the `voice` pipeline argument. The valid values for `voice` are model-dependent, and this argument is documented accordingly.
```py
from transformers import pipeline
tts_pipeline = pipeline("text-to-audio", "sesame/csm-1b")
audio = tts_pipeline("I just got bamboozled by my cat.", voice="1")
tts_pipeline.save_audio(audio, "test.wav")
```
## Core changes
Prior to this PR, recent models with `text-to-audio` capabilities had no pipeline support (e.g. CSM, Dia, Qwen2.5 Omni). There was also not a standardized way to control the voice, if the model generates speech.
With this PR, `TextToAudioPipeline`:
- Uses a `processor` whenever possible, automatically (as opposed to needing a flag to control it);
- Takes a `voice` argument, which a few models can use out of the box. Whether a model can take `voice` in the pipeline is specified by properties of the model (if future models have the same properties, they will also have `voice` support);
- Standardizes outputs: ALL models using the pipeline will return `{"audio": <np.array with shape (audio_channels, sequence_length)>, "sampling_rate": <int>}`. Different models return different array formats, and as a result we can see different saving scripts in the model cards -- the pipeline standardizes it;
- Adds a function to save the audio, for convenience. This way, users don't need to learn about `soundfile` or alternatives. Uses the processor's `save_audio` whenever it is available.
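The standardized `(audio_channels, sequence_length)` contract above can be illustrated with a small helper. This is a hypothetical sketch, not the pipeline's actual implementation — in particular, the heuristic for detecting `(samples, channels)` layouts is an assumption:

```python
import numpy as np

def standardize_audio(audio) -> np.ndarray:
    """Coerce a model's raw waveform into (audio_channels, sequence_length)."""
    audio = np.asarray(audio, dtype=np.float32)
    if audio.ndim == 1:
        # Mono waveform of shape (samples,) -> add a channel axis
        audio = audio[np.newaxis, :]
    elif audio.ndim == 2 and audio.shape[0] > audio.shape[1]:
        # Assumption: treat the longer axis as time, i.e. a (samples, channels) layout
        audio = audio.T
    return audio

print(standardize_audio(np.zeros(16000)).shape)       # (1, 16000)
print(standardize_audio(np.zeros((16000, 2))).shape)  # (2, 16000)
```

With a single normalized shape, one saving snippet works for every model instead of one per model card.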
## Model support
Models with out-of-the-box pipeline support:
- TTS Models with `voice` support:
- CSM
- Qwen2.5 Omni
- Bark
- TTS Models w/o `voice` support:
- Dia -- voice is set in the prompt; needs chat templates? (we can hardcode it in the pipeline, though 🤔 )
- FastSpeech2Conformer (model has no voice control)
- SeamlessM4T and variants (model has no voice control)
- Vits (model has no voice control)
- TTA Models:
- Musicgen and variants
Models that have special requirements:
- SpeechT5 (requires `speaker_embeddings` argument; we could hide the complexity and accept `voice: int`, but the voice dataset most commonly used to pull the embeddings from isn't compatible with `datasets==4.0.0` 💔 )
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39796/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39795 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39795/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39795/comments | https://api.github.com/repos/huggingface/transformers/issues/39795/events | https://github.com/huggingface/transformers/issues/39795 | 3,277,623,478 | I_kwDOCUB6oc7DXJC2 | 39,795 | Regression - High memory usage when using transformers model with FSDP + LoRA | {
"login": "romitjain",
"id": 11757603,
"node_id": "MDQ6VXNlcjExNzU3NjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/11757603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/romitjain",
"html_url": "https://github.com/romitjain",
"followers_url": "https://api.github.com/users/romitjain/followers",
"following_url": "https://api.github.com/users/romitjain/following{/other_user}",
"gists_url": "https://api.github.com/users/romitjain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/romitjain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/romitjain/subscriptions",
"organizations_url": "https://api.github.com/users/romitjain/orgs",
"repos_url": "https://api.github.com/users/romitjain/repos",
"events_url": "https://api.github.com/users/romitjain/events{/privacy}",
"received_events_url": "https://api.github.com/users/romitjain/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-30T15:54:44 | 2025-08-19T09:58:06 | 2025-08-19T09:58:06 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.54.0
- Platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.34.1
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: Yes, FSDP with accelerate
- Using GPU in script?: Yes
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@zach-huggingface @SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
`sft.py`
```python
import torch
from accelerate import Accelerator
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
from peft.utils.other import fsdp_auto_wrap_policy


def main():
    model_name = "ibm-granite/granite-8b-code-base"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.bfloat16,
    )
    dummy_input = tokenizer("This is a test sentence.", return_tensors="pt")

    accelerator = Accelerator()
    # if accelerator.is_main_process:
    #     torch.cuda.memory._record_memory_history(max_entries=100000)

    peft_config = LoraConfig(
        r=4,
        lora_alpha=16,
        lora_dropout=0.05,
        bias="none",
        target_modules=["q_proj", "v_proj"]
    )
    model = get_peft_model(model, peft_config)

    fsdp_plugin = accelerator.state.fsdp_plugin
    fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(model)  # type: ignore

    if accelerator.is_main_process:
        model.print_trainable_parameters()

    model = accelerator.prepare(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    optimizer = accelerator.prepare(optimizer)

    model.train()
    torch.cuda.empty_cache()
    accelerator.print(f"Memory allocated after setup: {torch.cuda.memory_allocated() / 1e9:.2f} GB")

    outputs = model(**dummy_input, labels=dummy_input["input_ids"])
    loss = outputs.loss
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()

    peak_memory = torch.cuda.max_memory_allocated() / 1e9
    accelerator.print(f"Peak memory during training step: {peak_memory:.2f} GB")
    accelerator.wait_for_everyone()
    accelerator.print("Debug script finished successfully.")

    # if accelerator.is_main_process:
    #     torch.cuda.memory._dump_snapshot("profile_449.pkl")
    #     torch.cuda.memory._record_memory_history(enabled=None)


if __name__ == "__main__":
    """
    accelerate launch --config_file fsdp.yaml -m sft
    """
    main()
```
`fsdp.yaml`
```
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_cpu_ram_efficient_loading: false
fsdp_sync_module_states: true
fsdp_use_orig_params: true
mixed_precision: 'no'
machine_rank: 0
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
```
Run
```
accelerate launch --config_file fsdp.yaml -m sft
```
### Expected behavior
When I am doing LoRA fine-tuning with FSDP, I am seeing a huge memory usage compared to transformers v4.49.0. This issue is specific to versions including 4.50.0 and above. For example,
For 4 GPUs, I see the following memory usage on `transformers==4.49.0`
```
Memory allocated after setup: 4.03 GB
Peak memory during training step: 5.36 GB
```
vs when I am using any higher version `transformers==4.54.0`
```
Memory allocated after setup: 4.03 GB
Peak memory during training step: 20.16 GB
```
The peak memory usage is 4x.
Keeping all other library versions constant, the bug only appears when upgrading transformers to any version above 4.49.0. That's the reason I have raised the bug here and not in accelerate. Downgrading to `transformers==4.49.0` fixes the issue.
The issue ends here, but I will provide some of my findings in case it is helpful
1. I was able to reproduce this issue in other Llama-based models, too.
2. The bug only appears with FSDP + LoRA. Single GPU jobs don't seem to have the bug.
3. I have already tried the solution provided here: https://github.com/huggingface/accelerate/issues/3474 and it does not solve the issue
4. The memory explosion happens during the backward pass, specifically at: `accelerator.backward(loss)`
5. Looking at the memory profiling results, it _seems_ like all attention heads (Q, V) are somehow treated as trainable and the memory is reserved for their optimizer states which is leading to this 4x spike. I am also attaching the photos from the memory profiling.
6. For fsdp config, I have tried both values of - `fsdp_cpu_ram_efficient_loading`, `fsdp_use_orig_params`, with and without setting `fsdp_transformer_layer_cls_to_wrap`
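To double-check point 5 (whether the base Q/V projections are unexpectedly left trainable), a quick diagnostic like the following can help. This is a minimal sketch with a toy module standing in for the PEFT model, since only the `requires_grad` flags matter here:

```python
import torch.nn as nn

def unexpected_trainable(model: nn.Module) -> list:
    # LoRA fine-tuning should leave only "lora_*" parameters trainable;
    # anything else with requires_grad=True would explain extra optimizer state.
    return [
        name
        for name, param in model.named_parameters()
        if param.requires_grad and "lora" not in name
    ]

# Toy stand-in: a frozen base layer plus a trainable LoRA adapter
model = nn.ModuleDict({"q_proj": nn.Linear(8, 8), "lora_A": nn.Linear(8, 2, bias=False)})
for param in model["q_proj"].parameters():
    param.requires_grad = False

print(unexpected_trainable(model))  # []
```

Running a check like this on the wrapped model before and after `accelerator.prepare` might show where the flags flip.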
<img width="2288" height="622" alt="Image" src="https://github.com/user-attachments/assets/4350f0bd-8819-436a-8010-722fb24220ff" />
<img width="3087" height="672" alt="Image" src="https://github.com/user-attachments/assets/63975df9-cf41-4419-acfe-bb1b34a16821" /> | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39795/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39794 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39794/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39794/comments | https://api.github.com/repos/huggingface/transformers/issues/39794/events | https://github.com/huggingface/transformers/pull/39794 | 3,277,602,470 | PR_kwDOCUB6oc6hY1x2 | 39,794 | Fix ProphetNet forward to handle tuple encoder_outputs | {
"login": "Abdennacer-Badaoui",
"id": 106801897,
"node_id": "U_kgDOBl2q6Q",
"avatar_url": "https://avatars.githubusercontent.com/u/106801897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abdennacer-Badaoui",
"html_url": "https://github.com/Abdennacer-Badaoui",
"followers_url": "https://api.github.com/users/Abdennacer-Badaoui/followers",
"following_url": "https://api.github.com/users/Abdennacer-Badaoui/following{/other_user}",
"gists_url": "https://api.github.com/users/Abdennacer-Badaoui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abdennacer-Badaoui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abdennacer-Badaoui/subscriptions",
"organizations_url": "https://api.github.com/users/Abdennacer-Badaoui/orgs",
"repos_url": "https://api.github.com/users/Abdennacer-Badaoui/repos",
"events_url": "https://api.github.com/users/Abdennacer-Badaoui/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abdennacer-Badaoui/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-30T15:48:51 | 2025-09-15T14:02:05 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39794",
"html_url": "https://github.com/huggingface/transformers/pull/39794",
"diff_url": "https://github.com/huggingface/transformers/pull/39794.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39794.patch",
"merged_at": null
} |
This PR fixes a bug in ProphetNet where passing `encoder_outputs` as a tuple to `forward()` would raise:
`AttributeError: 'tuple' object has no attribute 'last_hidden_state'`
This happens because the `forward()` method assumes `encoder_outputs` is a `BaseModelOutput` when provided manually, while the docstring says it can be a tuple.
**Changes:**
Added a check in `forward()`:
Convert tuple `encoder_outputs` to `BaseModelOutput` format before passing to `ProphetNetSeq2SeqModelOutput`.
```python
if isinstance(encoder_outputs, tuple):
    encoder_outputs = BaseModelOutput(
        last_hidden_state=encoder_outputs[0],
        hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
        attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
    )
```
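The same normalization can be sketched in plain Python (using a `namedtuple` as a stand-in for `BaseModelOutput`, so the snippet is self-contained):

```python
from collections import namedtuple

# Stand-in for transformers' BaseModelOutput, for illustration only
EncoderOutput = namedtuple("EncoderOutput", ["last_hidden_state", "hidden_states", "attentions"])

def normalize_encoder_outputs(encoder_outputs):
    # Plain tuples lack attribute access; pad missing entries with None
    if isinstance(encoder_outputs, tuple) and not hasattr(encoder_outputs, "last_hidden_state"):
        padded = tuple(encoder_outputs) + (None, None)
        return EncoderOutput(*padded[:3])
    return encoder_outputs

out = normalize_encoder_outputs(("hidden",))
print(out.last_hidden_state)  # hidden
print(out.hidden_states)      # None
```

A value that already exposes `last_hidden_state` passes through untouched, so existing callers are unaffected.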
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39794/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39793 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39793/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39793/comments | https://api.github.com/repos/huggingface/transformers/issues/39793/events | https://github.com/huggingface/transformers/pull/39793 | 3,277,407,384 | PR_kwDOCUB6oc6hYK4E | 39,793 | Fix DAC conversion script | {
"login": "ebezzam",
"id": 4757445,
"node_id": "MDQ6VXNlcjQ3NTc0NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4757445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ebezzam",
"html_url": "https://github.com/ebezzam",
"followers_url": "https://api.github.com/users/ebezzam/followers",
"following_url": "https://api.github.com/users/ebezzam/following{/other_user}",
"gists_url": "https://api.github.com/users/ebezzam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ebezzam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ebezzam/subscriptions",
"organizations_url": "https://api.github.com/users/ebezzam/orgs",
"repos_url": "https://api.github.com/users/ebezzam/repos",
"events_url": "https://api.github.com/users/ebezzam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ebezzam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-07-30T14:48:55 | 2025-08-20T14:59:41 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39793",
"html_url": "https://github.com/huggingface/transformers/pull/39793",
"diff_url": "https://github.com/huggingface/transformers/pull/39793.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39793.patch",
"merged_at": null
} | # What does this PR do?
1. Fix DAC conversion:
- Most notably, performing weight norm removal on GPU instead of on CPU (otherwise layers with weight norm give different results when the models are run on GPU)
- Missing feature extractor parameters
- Correctly casting sampling rate
2. More consistent add/remove weight norm functions
3. Update explanation of high tolerances during testing. We now know it comes from weight norm removal on CPU (instead of GPU) and different implementations of Snake1d (their version uses JIT). Nevertheless, we stick with current models on the [Hub](https://huggingface.co/descript), as differences are minimal.
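For intuition: weight norm removal materializes `weight = g * v / ||v||` once, on whatever device the module currently lives on, so removing on CPU vs GPU can differ at float32 precision. A minimal CPU-only sketch of what removal computes (illustrative only, not the conversion script itself):

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm, remove_weight_norm

torch.manual_seed(0)
conv = weight_norm(nn.Conv1d(2, 2, kernel_size=3))
g = conv.weight_g.detach().clone()
v = conv.weight_v.detach().clone()

# Removal recomputes the effective weight once, on the current device
remove_weight_norm(conv)
norm = v.flatten(1).norm(dim=1).view(-1, 1, 1)  # per-output-channel norm (dim=0)
print(torch.allclose(conv.weight.detach(), g * v / norm))  # True
```

Doing this computation on CUDA before moving the model back to CPU keeps the converted checkpoint consistent with GPU inference.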
Reproducer to show weight norm difference when doing weight removal on a different device: https://gist.github.com/ebezzam/c83f186dcfeaab8cac040c960eb474cd | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39793/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39792 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39792/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39792/comments | https://api.github.com/repos/huggingface/transformers/issues/39792/events | https://github.com/huggingface/transformers/pull/39792 | 3,277,173,730 | PR_kwDOCUB6oc6hXWpD | 39,792 | Served models handle with nested content | {
"login": "jakeret",
"id": 11830719,
"node_id": "MDQ6VXNlcjExODMwNzE5",
"avatar_url": "https://avatars.githubusercontent.com/u/11830719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jakeret",
"html_url": "https://github.com/jakeret",
"followers_url": "https://api.github.com/users/jakeret/followers",
"following_url": "https://api.github.com/users/jakeret/following{/other_user}",
"gists_url": "https://api.github.com/users/jakeret/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jakeret/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jakeret/subscriptions",
"organizations_url": "https://api.github.com/users/jakeret/orgs",
"repos_url": "https://api.github.com/users/jakeret/repos",
"events_url": "https://api.github.com/users/jakeret/events{/privacy}",
"received_events_url": "https://api.github.com/users/jakeret/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-30T13:54:56 | 2025-08-05T06:40:02 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39792",
"html_url": "https://github.com/huggingface/transformers/pull/39792",
"diff_url": "https://github.com/huggingface/transformers/pull/39792.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39792.patch",
"merged_at": null
} | # What does this PR do?
This PR fixes an issue in the `transformers serve` functionality where the server could not handle chat messages with nested content (e.g., when the `content` field in a message was a list instead of a string or dictionary). The unhandled `TypeError` caused the server to crash instead of processing the request or returning a meaningful error message.
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/39791
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39792/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39791 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39791/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39791/comments | https://api.github.com/repos/huggingface/transformers/issues/39791/events | https://github.com/huggingface/transformers/issues/39791 | 3,277,159,862 | I_kwDOCUB6oc7DVX22 | 39,791 | `transformers serve` Fails to Handle Messages with Nested Content | {
"login": "jakeret",
"id": 11830719,
"node_id": "MDQ6VXNlcjExODMwNzE5",
"avatar_url": "https://avatars.githubusercontent.com/u/11830719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jakeret",
"html_url": "https://github.com/jakeret",
"followers_url": "https://api.github.com/users/jakeret/followers",
"following_url": "https://api.github.com/users/jakeret/following{/other_user}",
"gists_url": "https://api.github.com/users/jakeret/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jakeret/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jakeret/subscriptions",
"organizations_url": "https://api.github.com/users/jakeret/orgs",
"repos_url": "https://api.github.com/users/jakeret/repos",
"events_url": "https://api.github.com/users/jakeret/events{/privacy}",
"received_events_url": "https://api.github.com/users/jakeret/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-30T13:51:26 | 2025-08-13T10:26:20 | 2025-08-13T10:26:20 | NONE | null | null | null | null | ### System Info
## Bug Description
When using the `transformers serve` command to serve a model, it fails to process chat messages containing nested content, raising a `TypeError`. Specifically, when the served model endpoint is integrated with a chat interface such as Gradio and the incoming messages have a nested structure (e.g., lists instead of strings or dictionaries), the server encounters an unhandled exception and cannot process the request.
The issue stems from an assumption in the code that the `content` field of each message is either a string or a dictionary. However, if `content` is a list (as might occur with certain chat interfaces or user-provided inputs), the server attempts to access it using a string key (`message["content"]["text"]`), which leads to the following error:
```
TypeError: list indices must be integers or slices, not str
```
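A defensive extraction helper can cover all three shapes of `content` (plain string, dict, or list of typed parts). The sketch below is hypothetical and is not the actual `serving.py` code:

```python
def extract_text(message):
    """Return the text of a chat message whose `content` may be a plain
    string, a dict like {"text": ...}, or an OpenAI-style list of parts."""
    content = message["content"]
    if isinstance(content, str):
        return content
    if isinstance(content, dict):
        return content["text"]
    if isinstance(content, list):
        # Concatenate every text-typed part; ignore non-text parts.
        return "".join(
            part["text"]
            for part in content
            if isinstance(part, dict) and part.get("type") == "text"
        )
    raise TypeError(f"Unsupported content type: {type(content).__name__}")
```

With a helper like this, the list-shaped message from the reproduction below would be reduced to its text instead of crashing the server.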
## Environment Information
- transformers version: 4.55.0.dev0
- Platform: macOS-15.5-x86_64-i386-64bit-Mach-O
- Python version: 3.13.5
- Gradio version: 5.38.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
## Steps to Reproduce
1. Serve a model using `transformers serve`.
The server will be available at http://localhost:8000/v1/.
2. Use Gradio to launch a chat interface connected to the endpoint:
```python
import gradio as gr
gr.load_chat('http://localhost:8000/v1/', model='Qwen/Qwen2.5-0.5B-Instruct', token='').launch()
```
3. Send a message where the content field is nested (e.g., a list). For example:
```json
{"role": "user", "content": [{"text": "Can you help me write tests?", "type": "text"}]}
```
4. Observe the `TypeError` raised by the server.
### Expected behavior
## Expected Behaviour
The server should either:
1. Handle cases where content is a list, or
2. Gracefully return a meaningful error message to the client indicating that the input structure is invalid, rather than raising an internal exception.
## Actual Behaviour
The server crashes with the following traceback:
```python
File "/path/to/transformers/commands/serving.py", line 828, in get_processor_inputs_from_inbound_messages
content = message["content"] if isinstance(message["content"], str) else message["content"]["text"]
~~~~~~~~~~~~~~~~~~^^^^^^^^
TypeError: list indices must be integers or slices, not str
```
| {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39791/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39790 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39790/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39790/comments | https://api.github.com/repos/huggingface/transformers/issues/39790/events | https://github.com/huggingface/transformers/pull/39790 | 3,276,830,325 | PR_kwDOCUB6oc6hWKRL | 39,790 | Fix pil dependency torch extra | {
"login": "notkisk",
"id": 107971634,
"node_id": "U_kgDOBm-EMg",
"avatar_url": "https://avatars.githubusercontent.com/u/107971634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/notkisk",
"html_url": "https://github.com/notkisk",
"followers_url": "https://api.github.com/users/notkisk/followers",
"following_url": "https://api.github.com/users/notkisk/following{/other_user}",
"gists_url": "https://api.github.com/users/notkisk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/notkisk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/notkisk/subscriptions",
"organizations_url": "https://api.github.com/users/notkisk/orgs",
"repos_url": "https://api.github.com/users/notkisk/repos",
"events_url": "https://api.github.com/users/notkisk/events{/privacy}",
"received_events_url": "https://api.github.com/users/notkisk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T12:21:28 | 2025-10-14T14:04:46 | 2025-10-14T14:04:45 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39790",
"html_url": "https://github.com/huggingface/transformers/pull/39790",
"diff_url": "https://github.com/huggingface/transformers/pull/39790.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39790.patch",
"merged_at": null
} | ## What does this PR do?
Fixes [#39779](https://github.com/huggingface/transformers/issues/39779):
`ModuleNotFoundError: No module named 'PIL'` when running `transformers env` (or any CLI command) after installing `transformers[torch]` in a fresh environment.
---
## Problem
This error occurs because:
1. The `torch` extra in `setup.py` only includes `torch` and `accelerate`.
2. CLI commands eventually import `PIL` through the following chain:
`transformers_cli.py → chat.py → serving.py → from PIL import Image`
3. `Pillow` is only included in the `vision` extra, not in the `torch` extra.
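An alternative to widening the `torch` extra would be a lazy import guard in the CLI, so `PIL` is only required when image support is actually used. This is a hypothetical sketch, not the current `serving.py` code:

```python
try:
    from PIL import Image  # optional: only needed for image inputs
except ImportError:
    Image = None

def require_pil():
    """Raise a clear error only when image support is actually used."""
    if Image is None:
        raise ImportError(
            "Pillow is required for image inputs. "
            "Install it with `pip install Pillow`."
        )
    return Image
```

With such a guard, `transformers env` would import cleanly without Pillow, at the cost of a runtime error surfacing later for image workloads.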
**Reproduction steps:**
```bash
docker run --rm -it python:3.13-slim-bookworm bash -c "python3 -m pip install transformers[torch]; transformers env"
```
---
## Solution
Added `Pillow` to the `torch` extra in `setup.py`:
<details>
<summary>Diff</summary>
```python
# Before
extras["torch"] = deps_list("torch", "accelerate")
# After
extras["torch"] = deps_list("torch", "accelerate", "Pillow")
```
</details>
This ensures that users who install with `pip install transformers[torch]` get a fully functional CLI environment without encountering missing dependencies.
---
## Testing
* ✅ Reproduced the original error in a fresh install
* ✅ Verified fix: `pip install transformers[torch]` now includes Pillow
* ✅ Confirmed `transformers env` works without errors
* ✅ No breaking changes to existing functionality
---
## Checklist (Before submitting)
* [x] Fixes a packaging bug (dependency)
* [x] Read the [Pull Request section of the contributing guide](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request)
* [x] Linked to related issue: [#39779](https://github.com/huggingface/transformers/issues/39779)
* [x] No documentation updates needed
* [x] No new tests needed (existing coverage is sufficient)
---
## Who can review?
@ArthurZucker @SunMarc – This is a small but important fix to the `torch` extra dependencies that prevents CLI errors in fresh installs.
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39790/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39789 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39789/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39789/comments | https://api.github.com/repos/huggingface/transformers/issues/39789/events | https://github.com/huggingface/transformers/issues/39789 | 3,276,791,834 | I_kwDOCUB6oc7DT-Aa | 39,789 | ViTPose+ models post processing does not work for `dataset_index : 5` | {
"login": "testdummyvt",
"id": 190604764,
"node_id": "U_kgDOC1xl3A",
"avatar_url": "https://avatars.githubusercontent.com/u/190604764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/testdummyvt",
"html_url": "https://github.com/testdummyvt",
"followers_url": "https://api.github.com/users/testdummyvt/followers",
"following_url": "https://api.github.com/users/testdummyvt/following{/other_user}",
"gists_url": "https://api.github.com/users/testdummyvt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/testdummyvt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/testdummyvt/subscriptions",
"organizations_url": "https://api.github.com/users/testdummyvt/orgs",
"repos_url": "https://api.github.com/users/testdummyvt/repos",
"events_url": "https://api.github.com/users/testdummyvt/events{/privacy}",
"received_events_url": "https://api.github.com/users/testdummyvt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-30T12:11:44 | 2025-09-07T08:02:44 | 2025-09-07T08:02:44 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.54.1
- Platform: macOS-15.1-arm64-arm-64bit
- Python version: 3.11.12
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@qubvel
When I change `inputs["dataset_index"] = torch.tensor([0], device=device)` to `inputs["dataset_index"] = torch.tensor([5], device=device)`, the post-processing fails to process the keypoints correctly.
When I keep `dataset_index = 0`, I get the following results:
<img width="640" height="426" alt="Image" src="https://github.com/user-attachments/assets/e1d3dbe8-f3f7-4340-ab98-b5c13a03fbfc" />
But when I set `dataset_index = 5`, I get the following results:
<img width="640" height="426" alt="Image" src="https://github.com/user-attachments/assets/41efd8c1-8a28-4bc1-8610-5826266f6e50" />
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
[From Usage Tips](https://huggingface.co/docs/transformers/en/model_doc/vitpose#usage-tips)
I added `inputs["dataset_index"] = torch.tensor([5], device=device)` in the official example, just after the
`inputs = image_processor(image, boxes=[person_boxes], return_tensors="pt").to(device)` line.
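A sanity check on the post-processed output makes the mismatch explicit. The index-to-count mapping below is an assumption for illustration (only the COCO and COCO-WholeBody entries are shown) and is not read from the model config:

```python
# Hypothetical mapping of ViTPose+ dataset_index to expected keypoint counts.
EXPECTED_KEYPOINTS = {
    0: 17,   # COCO body
    5: 133,  # COCO-WholeBody (body + feet + face + hands)
}

def check_keypoints(dataset_index, keypoints):
    """Raise if post-processing returned the wrong number of keypoints."""
    expected = EXPECTED_KEYPOINTS[dataset_index]
    if len(keypoints) != expected:
        raise ValueError(
            f"dataset_index={dataset_index}: expected {expected} "
            f"keypoints, got {len(keypoints)}"
        )
```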
### Expected behavior
With `dataset_index = 5`, which corresponds to the COCO-WholeBody dataset, the post-processing should return `133` keypoints. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39789/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39788 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39788/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39788/comments | https://api.github.com/repos/huggingface/transformers/issues/39788/events | https://github.com/huggingface/transformers/pull/39788 | 3,276,728,712 | PR_kwDOCUB6oc6hVzc9 | 39,788 | Fix re-compilations for cross attention cache | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T11:53:37 | 2025-07-30T12:52:03 | 2025-07-30T12:52:03 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39788",
"html_url": "https://github.com/huggingface/transformers/pull/39788",
"diff_url": "https://github.com/huggingface/transformers/pull/39788.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39788.patch",
"merged_at": "2025-07-30T12:52:03"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/39774.
As per title, if we are using the legacy `cache.key_cache[layer_idx]` a warning is emitted and fullgraph compilation breaks. This PR makes sure no warnings are raised when using the models in the core library | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39788/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39787 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39787/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39787/comments | https://api.github.com/repos/huggingface/transformers/issues/39787/events | https://github.com/huggingface/transformers/issues/39787 | 3,276,709,700 | I_kwDOCUB6oc7DTp9E | 39,787 | "CSM audio generation lacks reliable EOS: does not generate all-zero frames → never stops early" | {
"login": "sergiuxorga",
"id": 205925702,
"node_id": "U_kgDODEYtRg",
"avatar_url": "https://avatars.githubusercontent.com/u/205925702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sergiuxorga",
"html_url": "https://github.com/sergiuxorga",
"followers_url": "https://api.github.com/users/sergiuxorga/followers",
"following_url": "https://api.github.com/users/sergiuxorga/following{/other_user}",
"gists_url": "https://api.github.com/users/sergiuxorga/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sergiuxorga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergiuxorga/subscriptions",
"organizations_url": "https://api.github.com/users/sergiuxorga/orgs",
"repos_url": "https://api.github.com/users/sergiuxorga/repos",
"events_url": "https://api.github.com/users/sergiuxorga/events{/privacy}",
"received_events_url": "https://api.github.com/users/sergiuxorga/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-30T11:47:19 | 2025-10-06T17:07:05 | 2025-10-06T17:07:05 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.54.0
- Platform: Linux-5.15.0-144-generic-x86_64-with-glibc2.35
- Python version: 3.10.18
- Huggingface_hub version: 0.34.2
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA H100 PCIe
When using `CsmForConditionalGeneration.generate(...)`, the model often fails to stop early even when the audio becomes silent or noise-level, which I believe is because the model never emits clean **all-zero frames**.
As a result, generation continues until `max_new_tokens` is exhausted, often producing long tails of silent or noisy output.
### Who can help?
@eustlb
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import CsmForConditionalGeneration, AutoProcessor
model_id = "sesame/csm-1b"
device = "cuda" if torch.cuda.is_available() else "cpu"
# load the model and the processor
processor = AutoProcessor.from_pretrained(model_id)
model = CsmForConditionalGeneration.from_pretrained(model_id, device_map=device)
# prepare the inputs
text = "[0]The past is just a story we tell ourselves." # `[0]` for speaker id 0
inputs = processor(text, add_special_tokens=True).to(device)
# another equivalent way to prepare the inputs
conversation = [
{"role": "0", "content": [{"type": "text", "text": "The past is just a story we tell ourselves."}]},
]
inputs = processor.apply_chat_template(
conversation,
tokenize=True,
return_dict=True,
).to(device)
# infer the model
audio = model.generate(**inputs, output_audio=True)
processor.save_audio(audio, "example_without_context.wav")
```
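As a stop-gap until the model emits a reliable end-of-speech signal, the trailing low-energy tail can be trimmed from the generated waveform after the fact. This is a minimal sketch operating on a mono list of float samples; the threshold and frame size are illustrative, not tuned values:

```python
def trim_trailing_silence(samples, threshold=1e-3, frame=1024):
    """Drop trailing frames whose mean absolute amplitude is below threshold."""
    end = len(samples)
    while end > 0:
        start = max(0, end - frame)
        chunk = samples[start:end]
        energy = sum(abs(s) for s in chunk) / len(chunk)
        if energy >= threshold:
            break  # first frame (from the end) with real signal
        end = start
    return samples[:end]
```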
### Expected behavior
The model should emit an explicit all-zero frame to signal the end of speech, so that generation can stop before `max_new_tokens` is exhausted. | {
"login": "eustlb",
"id": 94853470,
"node_id": "U_kgDOBadZXg",
"avatar_url": "https://avatars.githubusercontent.com/u/94853470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eustlb",
"html_url": "https://github.com/eustlb",
"followers_url": "https://api.github.com/users/eustlb/followers",
"following_url": "https://api.github.com/users/eustlb/following{/other_user}",
"gists_url": "https://api.github.com/users/eustlb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eustlb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eustlb/subscriptions",
"organizations_url": "https://api.github.com/users/eustlb/orgs",
"repos_url": "https://api.github.com/users/eustlb/repos",
"events_url": "https://api.github.com/users/eustlb/events{/privacy}",
"received_events_url": "https://api.github.com/users/eustlb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39787/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39786 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39786/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39786/comments | https://api.github.com/repos/huggingface/transformers/issues/39786/events | https://github.com/huggingface/transformers/pull/39786 | 3,276,648,402 | PR_kwDOCUB6oc6hVhww | 39,786 | Remove super call from EncoderDecoderCache init | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T11:23:40 | 2025-08-05T10:09:29 | 2025-08-05T10:09:29 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39786",
"html_url": "https://github.com/huggingface/transformers/pull/39786",
"diff_url": "https://github.com/huggingface/transformers/pull/39786.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39786.patch",
"merged_at": null
} | `EncoderDecoderCache` is a wrapper around two caches. Thus, it should not call super() to initialize `self.layers` or `self.layer_classes`.
This can create bugs since other parts of the code might try to access those attributes.
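The design point can be shown with a toy sketch (simplified stand-ins, not the real `Cache`/`EncoderDecoderCache` classes): the wrapper subclasses the base cache but deliberately skips `super().__init__()`, so it never owns a `layers` attribute that other code could mistakenly read.

```python
class ToyCache:
    def __init__(self):
        self.layers = []  # per-layer storage that the base class sets up

class ToyEncoderDecoderCache(ToyCache):
    """Wrapper around a self-attention and a cross-attention cache."""

    def __init__(self, self_attention_cache, cross_attention_cache):
        # Deliberately no super().__init__(): the wrapper only delegates
        # to its two inner caches and should not carry its own `layers`.
        self.self_attention_cache = self_attention_cache
        self.cross_attention_cache = cross_attention_cache

wrapper = ToyEncoderDecoderCache(ToyCache(), ToyCache())
```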
The call was added in https://github.com/huggingface/transformers/pull/39590 | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39786/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39785 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39785/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39785/comments | https://api.github.com/repos/huggingface/transformers/issues/39785/events | https://github.com/huggingface/transformers/pull/39785 | 3,276,480,451 | PR_kwDOCUB6oc6hU8vO | 39,785 | fix mllama integration tests | {
"login": "itazap",
"id": 31893021,
"node_id": "MDQ6VXNlcjMxODkzMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/31893021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itazap",
"html_url": "https://github.com/itazap",
"followers_url": "https://api.github.com/users/itazap/followers",
"following_url": "https://api.github.com/users/itazap/following{/other_user}",
"gists_url": "https://api.github.com/users/itazap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itazap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itazap/subscriptions",
"organizations_url": "https://api.github.com/users/itazap/orgs",
"repos_url": "https://api.github.com/users/itazap/repos",
"events_url": "https://api.github.com/users/itazap/events{/privacy}",
"received_events_url": "https://api.github.com/users/itazap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-30T10:21:46 | 2025-08-19T08:05:12 | null | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39785",
"html_url": "https://github.com/huggingface/transformers/pull/39785",
"diff_url": "https://github.com/huggingface/transformers/pull/39785.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39785.patch",
"merged_at": null
} | fix mllama integration tests
missing encoder states to intermediate states | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39785/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39784 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39784/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39784/comments | https://api.github.com/repos/huggingface/transformers/issues/39784/events | https://github.com/huggingface/transformers/pull/39784 | 3,276,450,461 | PR_kwDOCUB6oc6hU2JX | 39,784 | Mllama new outputs - fixing integration tests | {
"login": "itazap",
"id": 31893021,
"node_id": "MDQ6VXNlcjMxODkzMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/31893021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itazap",
"html_url": "https://github.com/itazap",
"followers_url": "https://api.github.com/users/itazap/followers",
"following_url": "https://api.github.com/users/itazap/following{/other_user}",
"gists_url": "https://api.github.com/users/itazap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itazap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itazap/subscriptions",
"organizations_url": "https://api.github.com/users/itazap/orgs",
"repos_url": "https://api.github.com/users/itazap/repos",
"events_url": "https://api.github.com/users/itazap/events{/privacy}",
"received_events_url": "https://api.github.com/users/itazap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T10:10:57 | 2025-07-30T10:24:54 | 2025-07-30T10:20:11 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39784",
"html_url": "https://github.com/huggingface/transformers/pull/39784",
"diff_url": "https://github.com/huggingface/transformers/pull/39784.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39784.patch",
"merged_at": null
} | fixing integration tests! | {
"login": "itazap",
"id": 31893021,
"node_id": "MDQ6VXNlcjMxODkzMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/31893021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itazap",
"html_url": "https://github.com/itazap",
"followers_url": "https://api.github.com/users/itazap/followers",
"following_url": "https://api.github.com/users/itazap/following{/other_user}",
"gists_url": "https://api.github.com/users/itazap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itazap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itazap/subscriptions",
"organizations_url": "https://api.github.com/users/itazap/orgs",
"repos_url": "https://api.github.com/users/itazap/repos",
"events_url": "https://api.github.com/users/itazap/events{/privacy}",
"received_events_url": "https://api.github.com/users/itazap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39784/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39783 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39783/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39783/comments | https://api.github.com/repos/huggingface/transformers/issues/39783/events | https://github.com/huggingface/transformers/pull/39783 | 3,276,279,261 | PR_kwDOCUB6oc6hUQ9d | 39,783 | more info in `model_results.json` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T09:13:16 | 2025-07-30T09:43:12 | 2025-07-30T09:43:10 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39783",
"html_url": "https://github.com/huggingface/transformers/pull/39783",
"diff_url": "https://github.com/huggingface/transformers/pull/39783.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39783.patch",
"merged_at": "2025-07-30T09:43:10"
} | # What does this PR do?
Add `skipped` and `errors` counts | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39783/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39782 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39782/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39782/comments | https://api.github.com/repos/huggingface/transformers/issues/39782/events | https://github.com/huggingface/transformers/pull/39782 | 3,276,276,756 | PR_kwDOCUB6oc6hUQci | 39,782 | ⚠️⚠️ Use `dtype` instead of `torch_dtype` everywhere! | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T09:12:27 | 2025-08-22T10:34:18 | 2025-08-22T10:34:17 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39782",
"html_url": "https://github.com/huggingface/transformers/pull/39782",
"diff_url": "https://github.com/huggingface/transformers/pull/39782.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39782.patch",
"merged_at": "2025-08-22T10:34:17"
} | # What does this PR do?
As per the title! Now that we are officially all-in on PyTorch, let's update the argument from `torch_dtype` to a simple `dtype` to be coherent with PyTorch and simplify!
Of course, it fully keeps BC for `torch_dtype` everywhere (with added tests to make sure), but it's already updated everywhere in the lib to simplify future removal | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39782/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39781 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39781/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39781/comments | https://api.github.com/repos/huggingface/transformers/issues/39781/events | https://github.com/huggingface/transformers/pull/39781 | 3,276,216,835 | PR_kwDOCUB6oc6hUDg6 | 39,781 | Simplify conditional code | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T08:52:10 | 2025-07-30T12:50:41 | 2025-07-30T12:32:11 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39781",
"html_url": "https://github.com/huggingface/transformers/pull/39781",
"diff_url": "https://github.com/huggingface/transformers/pull/39781.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39781.patch",
"merged_at": "2025-07-30T12:32:11"
} | # What does this PR do?
This PR simplifies more `if` conditions and dict `get` code. The changes are part of efforts to finally enable `SIM` checks in `ruff`. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39781/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39780 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39780/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39780/comments | https://api.github.com/repos/huggingface/transformers/issues/39780/events | https://github.com/huggingface/transformers/issues/39780 | 3,276,206,518 | I_kwDOCUB6oc7DRvG2 | 39,780 | pip install 'transformers[torch]' pulls nvidia dependencies | {
"login": "marcindulak",
"id": 3178318,
"node_id": "MDQ6VXNlcjMxNzgzMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3178318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcindulak",
"html_url": "https://github.com/marcindulak",
"followers_url": "https://api.github.com/users/marcindulak/followers",
"following_url": "https://api.github.com/users/marcindulak/following{/other_user}",
"gists_url": "https://api.github.com/users/marcindulak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcindulak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcindulak/subscriptions",
"organizations_url": "https://api.github.com/users/marcindulak/orgs",
"repos_url": "https://api.github.com/users/marcindulak/repos",
"events_url": "https://api.github.com/users/marcindulak/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcindulak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-30T08:48:25 | 2025-09-09T19:51:55 | 2025-09-09T19:51:55 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.54.1
- Platform: Linux-6.11.0-17-generic-x86_64-with-glibc2.36
- Python version: 3.13.5
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: (NA)
### Who can help?
@stevhliu
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
docker run --rm -it python:3.13-slim-bookworm bash -c "python3 -m pip --version && python3 -m pip install transformers[torch] pillow; transformers env"
```
I'm adding an explicit pillow dependency due to https://github.com/huggingface/transformers/issues/39779
Output
```
pip 25.1.1 from /usr/local/lib/python3.13/site-packages/pip (python 3.13)
Collecting pillow
Downloading pillow-11.3.0-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (9.0 kB)
Collecting transformers[torch]
Downloading transformers-4.54.1-py3-none-any.whl.metadata (41 kB)
...
Collecting nvidia-cuda-nvrtc-cu12==12.6.77 (from torch>=2.1->transformers[torch])
Downloading nvidia_cuda_nvrtc_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)
...
```
### Expected behavior
No nvidia dependencies are pulled when using instructions from https://huggingface.co/docs/transformers/installation, that say
> To install a CPU-only version of Transformers and a machine learning framework, run the following command.
> pip install 'transformers[torch]'
> uv pip install 'transformers[torch]'
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39780/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39779 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39779/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39779/comments | https://api.github.com/repos/huggingface/transformers/issues/39779/events | https://github.com/huggingface/transformers/issues/39779 | 3,276,199,526 | I_kwDOCUB6oc7DRtZm | 39,779 | transformers env fails with: ModuleNotFoundError: No module named 'PIL' | {
"login": "marcindulak",
"id": 3178318,
"node_id": "MDQ6VXNlcjMxNzgzMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3178318?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcindulak",
"html_url": "https://github.com/marcindulak",
"followers_url": "https://api.github.com/users/marcindulak/followers",
"following_url": "https://api.github.com/users/marcindulak/following{/other_user}",
"gists_url": "https://api.github.com/users/marcindulak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcindulak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcindulak/subscriptions",
"organizations_url": "https://api.github.com/users/marcindulak/orgs",
"repos_url": "https://api.github.com/users/marcindulak/repos",
"events_url": "https://api.github.com/users/marcindulak/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcindulak/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-30T08:45:55 | 2025-08-18T15:28:11 | 2025-08-18T15:28:11 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.54.1
- Platform: Linux-6.11.0-17-generic-x86_64-with-glibc2.36
- Python version: 3.13.5
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: (NA)
### Who can help?
@stevhliu
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Using the command from https://huggingface.co/docs/transformers/installation
```
docker run --rm -it python:3.13-slim-bookworm bash -c "python3 -m pip --version && python3 -m pip install transformers[torch]; transformers env"
```
Output
```
pip 25.1.1 from /usr/local/lib/python3.13/site-packages/pip (python 3.13)
Collecting transformers[torch]
Downloading transformers-4.54.1-py3-none-any.whl.metadata (41 kB)
...
Successfully installed MarkupSafe-3.0.2 accelerate-1.9.0 certifi-2025.7.14 charset_normalizer-3.4.2 filelock-3.18.0 fsspec-2025.7.0 hf-xet-1.1.5 huggingface-hub-0.34.3 idna-3.10 jinja2-3.1.6 mpmath-1.3.0 networkx-3.5 numpy-2.3.2 nvidia-cublas-cu12-12.6.4.1 nvidia-cuda-cupti-cu12-12.6.80 nvidia-cuda-nvrtc-cu12-12.6.77 nvidia-cuda-runtime-cu12-12.6.77 nvidia-cudnn-cu12-9.5.1.17 nvidia-cufft-cu12-11.3.0.4 nvidia-cufile-cu12-1.11.1.6 nvidia-curand-cu12-10.3.7.77 nvidia-cusolver-cu12-11.7.1.2 nvidia-cusparse-cu12-12.5.4.2 nvidia-cusparselt-cu12-0.6.3 nvidia-nccl-cu12-2.26.2 nvidia-nvjitlink-cu12-12.6.85 nvidia-nvtx-cu12-12.6.77 packaging-25.0 psutil-7.0.0 pyyaml-6.0.2 regex-2025.7.31 requests-2.32.4 safetensors-0.5.3 setuptools-80.9.0 sympy-1.14.0 tokenizers-0.21.4 torch-2.7.1 tqdm-4.67.1 transformers-4.54.1 triton-3.3.1 typing-extensions-4.14.1 urllib3-2.5.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager, possibly rendering your system unusable. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. Use the --root-user-action option if you know what you are doing and want to suppress this warning.
Traceback (most recent call last):
File "/usr/local/bin/transformers", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/usr/local/lib/python3.13/site-packages/transformers/commands/transformers_cli.py", line 20, in <module>
from transformers.commands.chat import ChatCommand
File "/usr/local/lib/python3.13/site-packages/transformers/commands/chat.py", line 38, in <module>
from transformers.commands.serving import ServeArguments, ServeCommand
File "/usr/local/lib/python3.13/site-packages/transformers/commands/serving.py", line 34, in <module>
from PIL import Image
ModuleNotFoundError: No module named 'PIL'
```
### Expected behavior
`transformers env` runs without error when following the installation steps from the documentation | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39779/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39778 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39778/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39778/comments | https://api.github.com/repos/huggingface/transformers/issues/39778/events | https://github.com/huggingface/transformers/pull/39778 | 3,276,109,756 | PR_kwDOCUB6oc6hTsR2 | 39,778 | [gemma3] update conversion key mapping | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T08:14:39 | 2025-08-11T07:21:13 | 2025-08-11T07:21:13 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39778",
"html_url": "https://github.com/huggingface/transformers/pull/39778",
"diff_url": "https://github.com/huggingface/transformers/pull/39778.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39778.patch",
"merged_at": "2025-08-11T07:21:13"
} | # What does this PR do?
As per title, fixes https://github.com/huggingface/transformers/issues/39763 and ensures Sequence Classification model works with official checkpoints | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39778/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39778/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39777 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39777/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39777/comments | https://api.github.com/repos/huggingface/transformers/issues/39777/events | https://github.com/huggingface/transformers/pull/39777 | 3,276,096,006 | PR_kwDOCUB6oc6hTpUn | 39,777 | [VLMs] split out "get placeholder mask" to helper | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T08:10:18 | 2025-08-01T08:01:07 | 2025-08-01T08:01:07 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39777",
"html_url": "https://github.com/huggingface/transformers/pull/39777",
"diff_url": "https://github.com/huggingface/transformers/pull/39777.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39777.patch",
"merged_at": "2025-08-01T08:01:07"
} | # What does this PR do?
As per the title. TBH the helper is the same for most models (excluding Qwen and other special architectures), so we could just move it to the base modeling class. I thought that would be against the transformers philosophy, which is why we have near-identical code copied everywhere
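As a rough sketch (not the actual transformers implementation; the name and signature are illustrative), such a helper typically computes a boolean mask over the placeholder image tokens and expands it to the embedding shape so image features can be scattered in:

```python
import torch

def get_placeholder_mask(input_ids, image_token_id, inputs_embeds):
    # Mark positions holding the image placeholder token, then expand the
    # mask to the embedding shape so image features can be merged in
    # (e.g. via masked_scatter).
    special_image_mask = input_ids == image_token_id
    return special_image_mask.unsqueeze(-1).expand_as(inputs_embeds)
```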
Half of these are handled by modular anyway, and copied from LLaVA | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39777/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39776 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39776/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39776/comments | https://api.github.com/repos/huggingface/transformers/issues/39776/events | https://github.com/huggingface/transformers/issues/39776 | 3,275,983,966 | I_kwDOCUB6oc7DQ4xe | 39,776 | BioGPT Implementation Bug Report | {
"login": "SunnyThakur25",
"id": 110617757,
"node_id": "U_kgDOBpfknQ",
"avatar_url": "https://avatars.githubusercontent.com/u/110617757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunnyThakur25",
"html_url": "https://github.com/SunnyThakur25",
"followers_url": "https://api.github.com/users/SunnyThakur25/followers",
"following_url": "https://api.github.com/users/SunnyThakur25/following{/other_user}",
"gists_url": "https://api.github.com/users/SunnyThakur25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunnyThakur25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunnyThakur25/subscriptions",
"organizations_url": "https://api.github.com/users/SunnyThakur25/orgs",
"repos_url": "https://api.github.com/users/SunnyThakur25/repos",
"events_url": "https://api.github.com/users/SunnyThakur25/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunnyThakur25/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-30T07:27:56 | 2025-07-30T12:17:11 | 2025-07-30T12:13:22 | NONE | null | null | null | null | ### System Info
# BioGPT Implementation Bug Report
**Repository**: transformers
**Component**: `src/transformers/models/biogpt/modular_biogpt.py`
**Reporter**: SunnyThakur
**Date**: July 30, 2025
**Priority**: High
## Executive Summary
This report identifies critical runtime errors and implementation issues in the BioGPT model implementation that prevent the model from functioning correctly. The issues range from missing method implementations to device assignment errors that would cause immediate failures in production environments.
## Critical Issues
### Issue #1: Missing Return Statement in Positional Embedding
**Location**: `BioGptLearnedPositionalEmbedding.forward()` (Line ~65)
**Severity**: Critical
**Impact**: Runtime failure - method returns None instead of embeddings
**Current Code**:
```python
def forward(
self,
attention_mask: torch.LongTensor,
past_key_values_length: int = 0,
position_ids: Optional[torch.LongTensor] = None,
):
"""`input_ids_shape` is expected to be [bsz x seqlen]."""
super().forward(attention_mask, past_key_values_length, position_ids)
```
**Issue**: The method calls `super().forward()` but doesn't return the result, causing downstream components to receive `None`.
**Proposed Fix**:
```python
def forward(
self,
attention_mask: torch.LongTensor,
past_key_values_length: int = 0,
position_ids: Optional[torch.LongTensor] = None,
):
"""`input_ids_shape` is expected to be [bsz x seqlen]."""
return super().forward(attention_mask, past_key_values_length, position_ids)
```
### Issue #2: Missing Loss Function Implementation
**Location**: `BioGptForCausalLM.forward()` (Line ~650)
**Severity**: Critical
**Impact**: AttributeError - method does not exist
**Current Code**:
```python
lm_loss = self.loss_function(
prediction_scores,
labels,
vocab_size=self.config.vocab_size,
**kwargs,
)
```
**Issue**: The `loss_function` method is not defined anywhere in the class hierarchy.
**Proposed Fix**:
```python
from torch.nn import CrossEntropyLoss  # needed at module level

if labels is not None:
# Shift so that tokens < n predict n
shift_logits = prediction_scores[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
# Flatten the tokens
loss_fct = CrossEntropyLoss()
lm_loss = loss_fct(shift_logits.view(-1, self.config.vocab_size), shift_labels.view(-1))
```
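The shift-and-flatten pattern used in this fix can be exercised in isolation with plain PyTorch (the tensor shapes below are illustrative, not BioGPT internals):

```python
import torch
import torch.nn.functional as F

batch, seq_len, vocab = 2, 5, 11
prediction_scores = torch.randn(batch, seq_len, vocab)
labels = torch.randint(0, vocab, (batch, seq_len))

# Shift so that tokens < n predict n, then flatten for cross-entropy
shift_logits = prediction_scores[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
lm_loss = F.cross_entropy(shift_logits.view(-1, vocab), shift_labels.view(-1))
```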
### Issue #3: Device Assignment Error
**Location**: `BioGptPreTrainedModel._update_causal_mask()` (Line ~253)
**Severity**: Critical
**Impact**: AttributeError when attention_mask is None
**Current Code**:
```python
elif attention_mask is None:
attention_mask = make_flex_block_causal_mask(
torch.ones(
size=(input_tensor.shape[0], input_tensor.shape[1]),
device=attention_mask.device, # attention_mask is None!
)
)
```
**Issue**: When `attention_mask` is `None`, accessing `.device` attribute raises AttributeError.
**Proposed Fix**:
```python
elif attention_mask is None:
attention_mask = make_flex_block_causal_mask(
torch.ones(
size=(input_tensor.shape[0], input_tensor.shape[1]),
device=input_tensor.device,
)
)
```
## Medium Priority Issues
### Issue #4: Misleading Error Messages
**Location**: `BioGptModel.forward()` (Line ~431)
**Severity**: Medium
**Impact**: Developer confusion
**Current Code**:
```python
if (input_ids is None) ^ (inputs_embeds is not None):
raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
```
**Issue**: Error message references `decoder_input_ids` but parameter is `input_ids`.
**Proposed Fix**:
```python
if (input_ids is None) ^ (inputs_embeds is not None):
raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
```
### Issue #5: Unsafe Attribute Deletion
**Location**: `BioGptDecoderLayer.__init__()` (Line ~108-109)
**Severity**: Medium
**Impact**: Potential AttributeError if parent class changes
**Current Code**:
```python
del self.encoder_attn
del self.encoder_attn_layer_norm
```
**Proposed Fix**:
```python
if hasattr(self, 'encoder_attn'):
del self.encoder_attn
if hasattr(self, 'encoder_attn_layer_norm'):
del self.encoder_attn_layer_norm
```
### Issue #6: Redundant Code
**Location**: `BioGptModel.forward()` (Lines ~454 and ~490)
**Severity**: Low
**Impact**: Code maintainability
**Issue**: Gradient checkpointing compatibility check is performed twice in the same method.
## Testing Recommendations
To prevent regression and ensure fixes work correctly:
1. **Unit Tests**:
- Test `BioGptLearnedPositionalEmbedding.forward()` returns correct tensor shape
- Test `BioGptForCausalLM` with labels to verify loss calculation
- Test flex attention path with `attention_mask=None`
2. **Integration Tests**:
- Test model loading and forward pass with various input combinations
- Test gradient checkpointing with different cache settings
- Test all model heads (CausalLM, TokenClassification, SequenceClassification)
3. **Edge Case Tests**:
- Empty sequences
- Single token inputs
- Mixed device scenarios
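A standalone illustration of the Issue #1 failure mode (using plain `nn.Embedding` subclasses, not the actual BioGPT classes) shows how a unit test of this kind can catch a `forward()` that forgets to return:

```python
import torch
import torch.nn as nn

class BuggyEmbedding(nn.Embedding):
    def forward(self, attention_mask):
        positions = attention_mask.long().cumsum(-1) - 1
        # Bug: the result of super().forward() is not returned
        super().forward(positions)

class FixedEmbedding(nn.Embedding):
    def forward(self, attention_mask):
        positions = attention_mask.long().cumsum(-1) - 1
        return super().forward(positions)

def check_returns_tensor(module, attention_mask):
    # A forward() missing its return statement yields None here
    out = module(attention_mask)
    assert isinstance(out, torch.Tensor), "forward() returned None"
    return out
```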
## Compatibility Impact
These fixes maintain backward compatibility with existing model checkpoints and usage patterns. No changes to model architecture or parameter names are required.
## Conclusion
The identified issues prevent the BioGPT implementation from functioning correctly in its current state. The critical issues require immediate attention to ensure model usability. All proposed fixes are minimal, targeted changes that preserve the intended functionality while resolving the implementation bugs.
We recommend prioritizing the critical issues for the next patch release to prevent user-facing errors when using BioGPT models.
---
**Contact**: sunny48445@gmail.com
**Additional Notes**: All testing was performed against the current main branch. Code review was conducted following HuggingFace Transformers coding standards and patterns.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
"""
BioGPT Bug Reproduction Script
Demonstrates critical issues in the BioGPT implementation
"""
```python
import torch
from transformers import BioGptConfig, BioGptModel, BioGptForCausalLM, BioGptTokenizer
import traceback
def test_issue_1_missing_return_statement():
"""
Issue #1: BioGptLearnedPositionalEmbedding.forward() missing return statement
This will cause the positional embeddings to be None, leading to errors downstream
"""
print("=" * 60)
print("Testing Issue #1: Missing return statement in positional embedding")
print("=" * 60)
try:
config = BioGptConfig(
vocab_size=42384,
hidden_size=1024,
num_hidden_layers=24,
num_attention_heads=16,
intermediate_size=4096,
max_position_embeddings=1024,
pad_token_id=1,
)
model = BioGptModel(config)
# Create sample input
input_ids = torch.tensor([[1, 2, 3, 4, 5]])
attention_mask = torch.tensor([[1, 1, 1, 1, 1]])
# This should work but will fail due to missing return in positional embedding
output = model(
input_ids=input_ids,
attention_mask=attention_mask
)
print("✓ Model forward pass completed (unexpected - bug may be fixed)")
except Exception as e:
print(f"✗ Error encountered: {type(e).__name__}: {str(e)}")
print("This error is caused by the missing return statement in BioGptLearnedPositionalEmbedding.forward()")
traceback.print_exc()
def test_issue_2_missing_loss_function():
"""
Issue #2: BioGptForCausalLM calls non-existent self.loss_function()
"""
print("\n" + "=" * 60)
print("Testing Issue #2: Missing loss_function method")
print("=" * 60)
try:
config = BioGptConfig(
vocab_size=42384,
hidden_size=1024,
num_hidden_layers=24,
num_attention_heads=16,
intermediate_size=4096,
max_position_embeddings=1024,
pad_token_id=1,
)
model = BioGptForCausalLM(config)
# Create sample input with labels to trigger loss calculation
input_ids = torch.tensor([[1, 2, 3, 4, 5]])
labels = torch.tensor([[2, 3, 4, 5, 1]]) # Shifted for causal LM
# This will fail when trying to calculate loss
output = model(
input_ids=input_ids,
labels=labels
)
print("✓ Loss calculation completed (unexpected - bug may be fixed)")
except AttributeError as e:
if "loss_function" in str(e):
print(f"✗ Expected AttributeError: {str(e)}")
print("This error is caused by calling non-existent self.loss_function() method")
else:
print(f"✗ Unexpected AttributeError: {str(e)}")
traceback.print_exc()
except Exception as e:
print(f"✗ Unexpected error: {type(e).__name__}: {str(e)}")
traceback.print_exc()
def test_issue_3_device_assignment_error():
"""
Issue #3: Device assignment error when attention_mask is None
This specifically tests the flex attention path
"""
print("\n" + "=" * 60)
print("Testing Issue #3: Device assignment error with flex attention")
print("=" * 60)
try:
config = BioGptConfig(
vocab_size=42384,
hidden_size=1024,
num_hidden_layers=2, # Smaller for testing
num_attention_heads=16,
intermediate_size=4096,
max_position_embeddings=1024,
pad_token_id=1,
_attn_implementation="flex_attention" # Force flex attention
)
model = BioGptModel(config)
# Create sample input WITHOUT attention_mask to trigger the bug
input_ids = torch.tensor([[1, 2, 3, 4, 5]])
# This should trigger the device assignment error in _update_causal_mask
output = model(
input_ids=input_ids,
attention_mask=None # This triggers the bug
)
print("✓ Model forward pass with None attention_mask completed")
except AttributeError as e:
if "NoneType" in str(e) and "device" in str(e):
print(f"✗ Expected AttributeError: {str(e)}")
print("This error is caused by trying to access .device on None attention_mask")
else:
print(f"✗ Unexpected AttributeError: {str(e)}")
traceback.print_exc()
except Exception as e:
print(f"✗ Error: {type(e).__name__}: {str(e)}")
# This might fail for other reasons (flex attention not available, etc.)
print("Note: This test requires flex attention support")
def test_issue_4_misleading_error_message():
"""
Issue #4: Misleading error message in input validation
"""
print("\n" + "=" * 60)
print("Testing Issue #4: Misleading error message")
print("=" * 60)
try:
config = BioGptConfig(
vocab_size=42384,
hidden_size=1024,
num_hidden_layers=2,
num_attention_heads=16,
)
model = BioGptModel(config)
# Pass both input_ids and inputs_embeds to trigger validation error
input_ids = torch.tensor([[1, 2, 3, 4, 5]])
inputs_embeds = torch.randn(1, 5, 1024)
output = model(
input_ids=input_ids,
inputs_embeds=inputs_embeds # This should trigger validation error
)
print("✓ No validation error (unexpected)")
except ValueError as e:
error_msg = str(e)
print(f"✗ ValueError: {error_msg}")
if "decoder_input_ids" in error_msg:
print("This error message is misleading - it mentions 'decoder_input_ids' but parameter is 'input_ids'")
else:
print("Error message may have been fixed or differs from expected")
def demonstrate_working_example():
"""
Show what should work when bugs are fixed
"""
print("\n" + "=" * 60)
print("Demonstrating expected working behavior")
print("=" * 60)
print("Expected working code (after fixes):")
print("""
# After fixes, this should work:
from transformers import BioGptForCausalLM, BioGptTokenizer
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
text = "COVID-19 is a disease caused by"
inputs = tokenizer(text, return_tensors="pt")
# Generation should work
outputs = model.generate(**inputs, max_length=50)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Training with labels should work
labels = inputs.input_ids.clone()
loss_output = model(**inputs, labels=labels)
loss = loss_output.loss
""")
def main():
"""
Run all reproduction tests
"""
print("BioGPT Bug Reproduction Script")
print("This script demonstrates critical bugs in the BioGPT implementation")
print(f"PyTorch version: {torch.__version__}")
# Test each issue
test_issue_1_missing_return_statement()
test_issue_2_missing_loss_function()
test_issue_3_device_assignment_error()
test_issue_4_misleading_error_message()
demonstrate_working_example()
print("\n" + "=" * 60)
print("SUMMARY")
print("=" * 60)
print("The above tests demonstrate critical bugs that prevent BioGPT from working.")
print("These issues need to be fixed for the model to function correctly.")
print("See the bug report for detailed fixes and explanations.")
if __name__ == "__main__":
main()
```
```
# Configuration Information:
# - PyTorch version: Any recent version (tested with 1.13+)
# - Transformers: Latest main branch with BioGPT implementation
# - No special training configs needed - these are basic forward pass bugs
# - Hardware: CPU or GPU (bugs occur on both)
# - Python: 3.8+
# To run this script:
# 1. Install transformers from source with BioGPT implementation
# 2. Run: python reproduce_biogpt_bugs.py
# 3. Observe the various error types that demonstrate each bug
# Expected Outputs:
# - Issue #1: TypeError or AttributeError related to None embeddings
# - Issue #2: AttributeError about missing 'loss_function' method
# - Issue #3: AttributeError about NoneType having no 'device' attribute
# - Issue #4: ValueError with misleading parameter names
```
### Expected behavior
# Expected Behavior
## Basic Model Operations
### Model Loading and Initialization
```python
from transformers import BioGptConfig, BioGptModel, BioGptForCausalLM
# Should successfully create model instances without errors
config = BioGptConfig()
model = BioGptModel(config)
causal_model = BioGptForCausalLM(config)
```
**Expected**: Models instantiate successfully with all components properly initialized.
### Forward Pass (Base Model)
```python
import torch
input_ids = torch.tensor([[1, 2, 3, 4, 5]])
attention_mask = torch.tensor([[1, 1, 1, 1, 1]])
outputs = model(input_ids=input_ids, attention_mask=attention_mask)
```
**Expected**:
- Returns `BaseModelOutputWithPastAndCrossAttentions` object
- `outputs.last_hidden_state` has shape `(batch_size, sequence_length, hidden_size)`
- `outputs.past_key_values` contains proper key-value cache when `use_cache=True`
- No runtime errors or exceptions
### Causal Language Modeling
```python
# Forward pass without labels (inference)
outputs = causal_model(input_ids=input_ids, attention_mask=attention_mask)
# Forward pass with labels (training)
labels = torch.tensor([[2, 3, 4, 5, 1]]) # Next token prediction targets
outputs_with_loss = causal_model(input_ids=input_ids, labels=labels)
```
**Expected**:
- **Without labels**: Returns logits with shape `(batch_size, sequence_length, vocab_size)`
- **With labels**: Returns both logits and computed cross-entropy loss
- Loss should be a scalar tensor that can be used for backpropagation
- No AttributeError about missing loss functions
### Text Generation
```python
# Should work with generate() method
generated = causal_model.generate(
input_ids=input_ids,
max_length=20,
do_sample=True,
temperature=0.7
)
```
**Expected**:
- Returns tensor with generated token IDs
- Generated sequence extends beyond input length
- No errors during generation process
## Advanced Features
### Positional Embeddings
```python
# Should return proper embeddings, not None
pos_embeddings = model.embed_positions(
attention_mask=attention_mask,
past_key_values_length=0
)
```
**Expected**: Returns tensor with shape `(batch_size, sequence_length, hidden_size)`, not `None`
### Attention Mechanisms
```python
# Should work with different attention implementations
outputs_sdpa = model(input_ids, attention_mask=attention_mask) # SDPA
outputs_flash = model(input_ids, attention_mask=attention_mask) # Flash Attention
outputs_flex = model(input_ids, attention_mask=attention_mask) # Flex Attention
```
**Expected**: All attention implementations produce valid outputs without device errors
### Input Flexibility
```python
# Should work with None attention_mask
outputs = model(input_ids=input_ids, attention_mask=None)
# Should work with custom position_ids
position_ids = torch.arange(0, input_ids.size(-1)).unsqueeze(0)
outputs = model(input_ids=input_ids, position_ids=position_ids)
# Should work with inputs_embeds instead of input_ids
inputs_embeds = model.embed_tokens(input_ids)
outputs = model(inputs_embeds=inputs_embeds)
```
**Expected**:
- Model handles `None` attention_mask by creating appropriate causal mask
- Custom position_ids are properly processed
- `inputs_embeds` works as alternative to `input_ids`
## Task-Specific Heads
### Token Classification
```python
from transformers import BioGptForTokenClassification
token_model = BioGptForTokenClassification(config)
outputs = token_model(input_ids=input_ids, labels=labels)
```
**Expected**: Returns logits for each token and computes classification loss when labels provided
### Sequence Classification
```python
from transformers import BioGptForSequenceClassification
seq_model = BioGptForSequenceClassification(config)
outputs = seq_model(input_ids=input_ids, labels=labels)
```
**Expected**: Returns single classification logit per sequence and computes loss when labels provided
## Error Handling
### Input Validation
```python
# Should raise clear, accurate error messages
try:
model(input_ids=input_ids, inputs_embeds=inputs_embeds) # Both provided
except ValueError as e:
# Error message should mention correct parameter names
assert "input_ids" in str(e) and "inputs_embeds" in str(e)
assert "decoder_input_ids" not in str(e) # Should not mention non-existent params
```
**Expected**: Error messages accurately reflect actual parameter names and constraints
### Gradient Checkpointing
```python
model.gradient_checkpointing_enable()
outputs = model(input_ids=input_ids, use_cache=False) # Should work
```
**Expected**: No warnings about incompatible settings, proper memory-efficient computation
## Performance and Memory
### Caching Behavior
```python
# Should properly handle key-value caching
outputs1 = model(input_ids=input_ids, use_cache=True)
past_key_values = outputs1.past_key_values
# Second forward pass should reuse cache
new_input = torch.tensor([[6]]) # Next token
outputs2 = model(input_ids=new_input, past_key_values=past_key_values, use_cache=True)
```
**Expected**:
- Caching reduces computation for sequential generation
- Cache format is consistent and reusable
- No cache-related errors or warnings
### Device Consistency
```python
# Should work consistently across devices
if torch.cuda.is_available():
model = model.cuda()
input_ids = input_ids.cuda()
outputs = model(input_ids=input_ids)
```
**Expected**: All tensors remain on correct device throughout computation
## Integration with Transformers Ecosystem
### Pretrained Model Loading
```python
# Should work with hub integration (when model is available)
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
```
**Expected**: Seamless loading from Hugging Face Hub without initialization errors
### Tokenizer Compatibility
```python
from transformers import BioGptTokenizer
tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
text = "COVID-19 is a disease"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
```
**Expected**: Perfect integration between tokenizer and model without shape mismatches
### Training Integration
```python
from transformers import Trainer, TrainingArguments
# Should work with Trainer API
training_args = TrainingArguments(output_dir="./results", num_train_epochs=1)
trainer = Trainer(model=causal_model, args=training_args, train_dataset=dataset)
trainer.train()
```
**Expected**: Seamless integration with Transformers training pipeline | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39776/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39775 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39775/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39775/comments | https://api.github.com/repos/huggingface/transformers/issues/39775/events | https://github.com/huggingface/transformers/issues/39775 | 3,275,938,845 | I_kwDOCUB6oc7DQtwd | 39,775 | Granite 4.0 Tiny Preview inference broken in | {
"login": "nataxcan",
"id": 8396268,
"node_id": "MDQ6VXNlcjgzOTYyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8396268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nataxcan",
"html_url": "https://github.com/nataxcan",
"followers_url": "https://api.github.com/users/nataxcan/followers",
"following_url": "https://api.github.com/users/nataxcan/following{/other_user}",
"gists_url": "https://api.github.com/users/nataxcan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nataxcan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nataxcan/subscriptions",
"organizations_url": "https://api.github.com/users/nataxcan/orgs",
"repos_url": "https://api.github.com/users/nataxcan/repos",
"events_url": "https://api.github.com/users/nataxcan/events{/privacy}",
"received_events_url": "https://api.github.com/users/nataxcan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-30T07:09:57 | 2025-08-28T12:14:59 | 2025-08-28T12:14:58 | NONE | null | null | null | null | ### System Info
my environment:
```
- `transformers` version: 4.54.1
- Platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.39
- Python version: 3.12.11
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA L40S
```
### Who can help?
@ArthurZucker @gante
### Information
- [ ] The official example scripts
- [x] My own modified scripts
Running simple inference returns nonsense output when using `ibm-granite/granite-4.0-tiny-preview`. I'm not sure what's causing this, but I see a lot of recent changes to the model code.
normal model output (up to `transformers==4.53.3`) is:
`prediction: "Your dog's name is Jonathan."`
on both `4.54.0` and `4.54.1` I get:
`prediction: " the the the the the the the the the the the the the the the`
notes:
- this happens both when using the fast path and the slow path
- it happens on all GPUs I've tried (L40s, L4)
- It does not happen when running `mistralai/Mamba-Codestral-7B-v0.1`
Here's the script I'm running:
```python
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "7"
from transformers import GraniteMoeHybridForCausalLM, AutoTokenizer
model_name = 'ibm-granite/granite-4.0-tiny-preview'
model = GraniteMoeHybridForCausalLM.from_pretrained(
model_name,
).to('cuda')
tokenizer = AutoTokenizer.from_pretrained(
model_name
)
# baseline
conv = [{"role": "user", "content":"My dog is called Jonathan. Wait, what's the name of my dog?"}]
input_ids = tokenizer.apply_chat_template(
conv, return_tensors="pt",
return_dict=True).to(model.device)
output = model.generate(
**input_ids,
max_new_tokens=16,
)
prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print('prediction:', prediction)
```
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
1. install latest Transformers
2. generate output using granite 4 tiny preview
3. notice the output is nonsense
### Expected behavior
model should output text that makes sense given input | {
"login": "nataxcan",
"id": 8396268,
"node_id": "MDQ6VXNlcjgzOTYyNjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8396268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nataxcan",
"html_url": "https://github.com/nataxcan",
"followers_url": "https://api.github.com/users/nataxcan/followers",
"following_url": "https://api.github.com/users/nataxcan/following{/other_user}",
"gists_url": "https://api.github.com/users/nataxcan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nataxcan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nataxcan/subscriptions",
"organizations_url": "https://api.github.com/users/nataxcan/orgs",
"repos_url": "https://api.github.com/users/nataxcan/repos",
"events_url": "https://api.github.com/users/nataxcan/events{/privacy}",
"received_events_url": "https://api.github.com/users/nataxcan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39775/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39774 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39774/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39774/comments | https://api.github.com/repos/huggingface/transformers/issues/39774/events | https://github.com/huggingface/transformers/issues/39774 | 3,275,912,440 | I_kwDOCUB6oc7DQnT4 | 39,774 | Blip model got performance regression on compile mode after refactor cache. | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-30T06:59:09 | 2025-07-30T12:52:04 | 2025-07-30T12:52:04 | CONTRIBUTOR | null | null | null | null | ### System Info
transformers version: 4.55.0.dev0
Platform: Linux-6.11.0-28-generic-x86_64-with-glibc2.35
Python version: 3.11.13
Huggingface_hub version: 0.34.2
Safetensors version: 0.5.3
Accelerate version: 1.8.1
Accelerate config: not found
DeepSpeed version: not installed
PyTorch version (accelerator?): 2.9.0.dev20250714+cpu (NA)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using distributed or parallel set-up in script?:
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
TORCH_LOGS="+graph_breaks,+recompiles" python test.py
```python
import time
import requests
import torch
import PIL.Image
from transformers import pipeline
model_id = "Salesforce/blip-image-captioning-base"
image_to_text = pipeline("image-to-text", model=model_id, device="cpu", torch_dtype=torch.float16)
image_url = "https://ankur3107.github.io/assets/images/image-captioning-example.png"
image = PIL.Image.open(requests.get(image_url, stream=True, timeout=3000).raw)
for _ in range(10):
    output = image_to_text(image)
start = time.time()
output = image_to_text(image)
end = time.time()
print(f"eager mode pipeline latency {end - start}")
image_to_text.model.vision_model.forward = torch.compile(image_to_text.model.vision_model.forward)
image_to_text.model.text_decoder.forward = torch.compile(image_to_text.model.text_decoder.forward)
for _ in range(10):
    output = image_to_text(image)
start = time.time()
output = image_to_text(image)
end = time.time()
print(f"compile mode pipeline latency {end - start}")
```
Output logs:
```
W0730 06:58:23.995000 2266976 torch/_dynamo/convert_frame.py:1067] [12/8] torch._dynamo hit config.recompile_limit (8)
W0730 06:58:23.995000 2266976 torch/_dynamo/convert_frame.py:1067] [12/8] function: 'forward' (/home/jiqing/transformers/src/transformers/models/blip/modeling_blip_text.py:358)
W0730 06:58:23.995000 2266976 torch/_dynamo/convert_frame.py:1067] [12/8] last reason: 12/7: tensor 'past_key_value.self_attention_cache.layers[7].keys' size mismatch at index 2. expected 1, actual 2
W0730 06:58:23.995000 2266976 torch/_dynamo/convert_frame.py:1067] [12/8] To log all recompilation reasons, use TORCH_LOGS="recompiles".
W0730 06:58:23.995000 2266976 torch/_dynamo/convert_frame.py:1067] [12/8] To diagnose recompilation issues, see https://pytorch.org/docs/main/torch.compiler_troubleshooting.html
W0730 06:58:30.593000 2266976 torch/_dynamo/convert_frame.py:1067] [11/8] torch._dynamo hit config.recompile_limit (8)
W0730 06:58:30.593000 2266976 torch/_dynamo/convert_frame.py:1067] [11/8] function: '__call__' (/home/jiqing/transformers/src/transformers/modeling_layers.py:61)
W0730 06:58:30.593000 2266976 torch/_dynamo/convert_frame.py:1067] [11/8] last reason: 11/7: len(args[5].is_updated) == 6
W0730 06:58:30.593000 2266976 torch/_dynamo/convert_frame.py:1067] [11/8] To log all recompilation reasons, use TORCH_LOGS="recompiles".
W0730 06:58:30.593000 2266976 torch/_dynamo/convert_frame.py:1067] [11/8] To diagnose recompilation issues, see https://pytorch.org/docs/main/torch.compiler_troubleshooting.html
```
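The `size mismatch at index 2` guard failure above comes from the KV cache's sequence dimension growing at every decoding step, so each step presents a new shape to the compiler. A minimal pure-Python sketch (illustrative only, not torch internals; the limit of 8 mirrors the default `config.recompile_limit`) of why growing shapes exhaust a shape-keyed compile cache while a static shape reuses one graph:

```python
RECOMPILE_LIMIT = 8  # mirrors torch._dynamo's default config.recompile_limit

def make_compiled_fn():
    cache = {}  # simulated compile cache, keyed by input shape

    def fn(shape):
        if shape not in cache:
            if len(cache) >= RECOMPILE_LIMIT:
                return "fell back to eager"  # limit hit: stop recompiling
            cache[shape] = f"graph for {shape}"  # simulated (re)compilation
        return cache[shape]

    return fn

# Static cache: KV tensors keep one shape, so a single graph is reused.
fn_static = make_compiled_fn()
static_results = {fn_static((1, 12, 64, 64)) for _ in range(20)}

# Dynamic cache: the sequence dim grows each step, so every step is a new shape.
fn_dynamic = make_compiled_fn()
dynamic_results = [fn_dynamic((1, 12, seq, 64)) for seq in range(1, 21)]
```

With the static shape there is exactly one graph; with the growing shape, the first 8 steps each trigger a recompilation and everything after falls back to eager, which matches the slowdown reported here.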
### Expected behavior
Before PR https://github.com/huggingface/transformers/pull/38635, the script ran well and achieved a 1.5x speed-up. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39774/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39774/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39773 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39773/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39773/comments | https://api.github.com/repos/huggingface/transformers/issues/39773/events | https://github.com/huggingface/transformers/pull/39773 | 3,275,890,451 | PR_kwDOCUB6oc6hS87S | 39,773 | enable static cache on vision encoder decoder | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T06:49:37 | 2025-07-30T08:11:17 | 2025-07-30T08:10:46 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39773",
"html_url": "https://github.com/huggingface/transformers/pull/39773",
"diff_url": "https://github.com/huggingface/transformers/pull/39773.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39773.patch",
"merged_at": "2025-07-30T08:10:46"
} | After issue #39746 was fixed, the vision encoder decoder model supports static cache and can get a more significant speed-up:
```python
import time
import requests
import torch
import PIL.Image
from transformers import pipeline
model_id = "nlpconnect/vit-gpt2-image-captioning"
image_to_text = pipeline("image-to-text", model=model_id, device="cpu", torch_dtype=torch.float16)
image_url = "https://ankur3107.github.io/assets/images/image-captioning-example.png"
image = PIL.Image.open(requests.get(image_url, stream=True, timeout=3000).raw)
generation_config = image_to_text.model.generation_config
generation_config.cache_implementation = "static"
for _ in range(10):
    output = image_to_text(image, generate_kwargs={"generation_config": generation_config})
start = time.time()
output = image_to_text(image, generate_kwargs={"generation_config": generation_config})
end = time.time()
print(f"eager mode pipeline latency {end - start}")
image_to_text.model.forward = torch.compile(image_to_text.model.forward)
for _ in range(10):
    output = image_to_text(image, generate_kwargs={"generation_config": generation_config})
start = time.time()
output = image_to_text(image, generate_kwargs={"generation_config": generation_config})
end = time.time()
print(f"compile mode pipeline latency {end - start}")
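
# --- A hedged aside (a sketch, not part of this PR): single measurements are
# noisy; warming up and taking the median of several runs is more robust. ---
import time
from statistics import median

def benchmark(fn, warmup=10, iters=5):
    for _ in range(warmup):  # exclude compilation/caching from the timings
        fn()
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return median(times)

# e.g.: benchmark(lambda: image_to_text(image, generate_kwargs={"generation_config": generation_config}))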
``` | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39773/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39772 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39772/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39772/comments | https://api.github.com/repos/huggingface/transformers/issues/39772/events | https://github.com/huggingface/transformers/pull/39772 | 3,275,639,238 | PR_kwDOCUB6oc6hSHQF | 39,772 | Fix missing initializations for models created in 2022 | {
"login": "bvantuan",
"id": 37981884,
"node_id": "MDQ6VXNlcjM3OTgxODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/37981884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bvantuan",
"html_url": "https://github.com/bvantuan",
"followers_url": "https://api.github.com/users/bvantuan/followers",
"following_url": "https://api.github.com/users/bvantuan/following{/other_user}",
"gists_url": "https://api.github.com/users/bvantuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bvantuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bvantuan/subscriptions",
"organizations_url": "https://api.github.com/users/bvantuan/orgs",
"repos_url": "https://api.github.com/users/bvantuan/repos",
"events_url": "https://api.github.com/users/bvantuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/bvantuan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-30T04:36:35 | 2025-08-20T12:19:40 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39772",
"html_url": "https://github.com/huggingface/transformers/pull/39772",
"diff_url": "https://github.com/huggingface/transformers/pull/39772.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39772.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes missing weight initializations for models created in 2022.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Cyrilvallez
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39772/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39771 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39771/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39771/comments | https://api.github.com/repos/huggingface/transformers/issues/39771/events | https://github.com/huggingface/transformers/issues/39771 | 3,275,352,527 | I_kwDOCUB6oc7DOenP | 39,771 | would it be possible to standardize on the vx.y.z format for all tags | {
"login": "hubutui",
"id": 2948593,
"node_id": "MDQ6VXNlcjI5NDg1OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2948593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hubutui",
"html_url": "https://github.com/hubutui",
"followers_url": "https://api.github.com/users/hubutui/followers",
"following_url": "https://api.github.com/users/hubutui/following{/other_user}",
"gists_url": "https://api.github.com/users/hubutui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hubutui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hubutui/subscriptions",
"organizations_url": "https://api.github.com/users/hubutui/orgs",
"repos_url": "https://api.github.com/users/hubutui/repos",
"events_url": "https://api.github.com/users/hubutui/events{/privacy}",
"received_events_url": "https://api.github.com/users/hubutui/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-30T00:46:00 | 2025-09-07T08:02:48 | 2025-09-07T08:02:48 | NONE | null | null | null | null | This git repo used to use vx.y.z, but the latest is 4.54.1, which is inconsistent. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39771/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39770 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39770/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39770/comments | https://api.github.com/repos/huggingface/transformers/issues/39770/events | https://github.com/huggingface/transformers/pull/39770 | 3,275,209,122 | PR_kwDOCUB6oc6hQtD_ | 39,770 | [Bugfix] Fix `AutoModel.from_pretrained(..., quantization_config=None)` regression | {
"login": "kylesayrs",
"id": 17103692,
"node_id": "MDQ6VXNlcjE3MTAzNjky",
"avatar_url": "https://avatars.githubusercontent.com/u/17103692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylesayrs",
"html_url": "https://github.com/kylesayrs",
"followers_url": "https://api.github.com/users/kylesayrs/followers",
"following_url": "https://api.github.com/users/kylesayrs/following{/other_user}",
"gists_url": "https://api.github.com/users/kylesayrs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylesayrs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylesayrs/subscriptions",
"organizations_url": "https://api.github.com/users/kylesayrs/orgs",
"repos_url": "https://api.github.com/users/kylesayrs/repos",
"events_url": "https://api.github.com/users/kylesayrs/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylesayrs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-29T23:09:01 | 2025-09-04T15:59:20 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39770",
"html_url": "https://github.com/huggingface/transformers/pull/39770",
"diff_url": "https://github.com/huggingface/transformers/pull/39770.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39770.patch",
"merged_at": null
} | ## Purpose ##
* Fix a bug where passing `quantization_config=None` to `AutoModelForCausalLM` leads to an error:
```
Traceback (most recent call last):
  File "/home/kyle/llm-compressor/asdf.py", line 2, in <module>
    model = AutoModelForCausalLM.from_pretrained("RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w4a16", quantization_config=None)
  File "/home/kyle/transformers/src/transformers/models/auto/auto_factory.py", line 547, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/home/kyle/transformers/src/transformers/models/auto/configuration_auto.py", line 1277, in from_pretrained
    return config_class.from_dict(config_dict, **unused_kwargs)
  File "/home/kyle/transformers/src/transformers/configuration_utils.py", line 817, in from_dict
    logger.info(f"Model config {config}")
  File "/home/kyle/transformers/src/transformers/configuration_utils.py", line 851, in __repr__
    return f"{self.__class__.__name__} {self.to_json_string()}"
  File "/home/kyle/transformers/src/transformers/configuration_utils.py", line 963, in to_json_string
    config_dict = self.to_diff_dict()
  File "/home/kyle/transformers/src/transformers/configuration_utils.py", line 865, in to_diff_dict
    config_dict = self.to_dict()
  File "/home/kyle/transformers/src/transformers/configuration_utils.py", line 942, in to_dict
    self.quantization_config.to_dict()
AttributeError: 'NoneType' object has no attribute 'to_dict'
```
* This option was supported in previous releases and is documented in the docstring:
```
quantization_config (`Union[QuantizationConfigMixin,Dict]`, *optional*):
```
* I'm not exactly sure which change caused the regression; I'm still trying to track that down
## Changes ##
* Ensuring that the quantization config is never overwritten when instantiating from `_BaseAutoModelClass`
## Testing ##
```python3
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w4a16", quantization_config=None)
``` | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39770/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39770/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39769 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39769/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39769/comments | https://api.github.com/repos/huggingface/transformers/issues/39769/events | https://github.com/huggingface/transformers/pull/39769 | 3,274,981,091 | PR_kwDOCUB6oc6hP-lu | 39,769 | Fix Evolla and xLSTM tests | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-29T21:01:52 | 2025-08-11T15:15:00 | 2025-07-30T07:51:55 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39769",
"html_url": "https://github.com/huggingface/transformers/pull/39769",
"diff_url": "https://github.com/huggingface/transformers/pull/39769.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39769.patch",
"merged_at": "2025-07-30T07:51:55"
} | # What does this PR do?
As per the title | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39769/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39768 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39768/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39768/comments | https://api.github.com/repos/huggingface/transformers/issues/39768/events | https://github.com/huggingface/transformers/pull/39768 | 3,274,911,235 | PR_kwDOCUB6oc6hPva8 | 39,768 | Benchmarking improvements | {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-29T20:32:47 | 2025-08-25T11:41:19 | 2025-08-15T13:59:11 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39768",
"html_url": "https://github.com/huggingface/transformers/pull/39768",
"diff_url": "https://github.com/huggingface/transformers/pull/39768.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39768.patch",
"merged_at": "2025-08-15T13:59:11"
} | # What does this PR do?
The goal of this PR is to start improving the benchmarking infrastructure of Transformers. In the short term, we'd like to build a dataset similar to what Diffusers did (https://huggingface.co/datasets/diffusers/benchmarks) and put a dashboard on Spaces.
This PR starts with the data acquisition part by adding an option to generate CSV files that we can later upload to Datasets. In the meantime, it keeps the existing Postgres path to Grafana in place.
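As a rough illustration of the CSV-generation idea (the column names here are hypothetical, not this PR's actual schema), the standard-library `csv` module is enough to serialize benchmark rows:

```python
import csv
import io

# Hypothetical column layout -- the PR's actual schema may differ.
FIELDS = ["commit_id", "model", "metric", "value"]

def rows_to_csv(rows):
    """Serialize a list of benchmark-result dicts to CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

csv_text = rows_to_csv([
    {"commit_id": "abc123", "model": "llama", "metric": "tokens_per_second", "value": 42.0},
])
```

The resulting text can be written to disk and uploaded to a Hub dataset alongside the existing Postgres path.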
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39768/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39767 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39767/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39767/comments | https://api.github.com/repos/huggingface/transformers/issues/39767/events | https://github.com/huggingface/transformers/issues/39767 | 3,274,886,306 | I_kwDOCUB6oc7DMsyi | 39,767 | Model with non-string type property tool giving incomplete response using VLLM | {
"login": "anyon17",
"id": 174828730,
"node_id": "U_kgDOCmusug",
"avatar_url": "https://avatars.githubusercontent.com/u/174828730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anyon17",
"html_url": "https://github.com/anyon17",
"followers_url": "https://api.github.com/users/anyon17/followers",
"following_url": "https://api.github.com/users/anyon17/following{/other_user}",
"gists_url": "https://api.github.com/users/anyon17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anyon17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anyon17/subscriptions",
"organizations_url": "https://api.github.com/users/anyon17/orgs",
"repos_url": "https://api.github.com/users/anyon17/repos",
"events_url": "https://api.github.com/users/anyon17/events{/privacy}",
"received_events_url": "https://api.github.com/users/anyon17/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-29T20:23:34 | 2025-09-07T08:02:50 | 2025-09-07T08:02:50 | NONE | null | null | null | null | ### System Info
I am running a vLLM server, started with the following command:
`vllm serve Qwen/Qwen2.5-7B-Instruct --tool-call-parser hermes --enable-auto-tool-choice`
When I use the curl request below:
```
curl --location 'http://localhost:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer EMPTY' \
--data '{
"model": "Qwen/Qwen2.5-7B-Instruct",
"messages": [
{
"role": "user",
"content": "Can you add 2 and 7 ?"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "arithmetic__add",
"description": "Adds two numbers",
"parameters": {
"type": "object",
"properties": {
"x": {
"type": "number",
"description": "first number"
},
"y": {
"type": "number",
"description": "second number"
}
},
"required": [
"x",
"y"
]
}
}
}
],
"tool_choice": "auto",
"stream": true
}'
```
I get the following chunks:
```
Some([IntermediateToolCallContent { function: Name("arithmetic__add") }])
Some([IntermediateToolCallContent { function: Arguments("{\"x\":") }])
Some([IntermediateToolCallContent { function: Arguments(" ") }])
Some([IntermediateToolCallContent { function: Arguments("7") }])
Some([IntermediateToolCallContent { function: Arguments("}") }])
```
This is incorrect, because "y" is missing.
When I change the types of x and y to "string"
and use the curl request below:
```
curl --location 'http://localhost:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer EMPTY' \
--data '{
"model": "Qwen/Qwen2.5-7B-Instruct",
"messages": [
{
"role": "user",
"content": "Can you add 2 and 7 ?"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "arithmetic__add",
"description": "Adds two numbers",
"parameters": {
"type": "object",
"properties": {
"x": {
"type": "string",
"description": "first number"
},
"y": {
"type": "string",
"description": "second number"
}
},
"required": [
"x",
"y"
]
}
}
}
],
"tool_choice": "auto",
"stream": true
}'
```
It gives the correct output (with string quotes, of course):
```
Some([IntermediateToolCallContent { function: Name("arithmetic__add") }])
Some([IntermediateToolCallContent { function: Arguments("{\"x\": \"") }])
Some([IntermediateToolCallContent { function: Arguments("2") }])
Some([IntermediateToolCallContent { function: Arguments("\",") }])
Some([IntermediateToolCallContent { function: Arguments(" \"") }])
Some([IntermediateToolCallContent { function: Arguments("y") }])
Some([IntermediateToolCallContent { function: Arguments("\":") }])
Some([IntermediateToolCallContent { function: Arguments(" \"") }])
Some([IntermediateToolCallContent { function: Arguments("7") }])
Some([IntermediateToolCallContent { function: Arguments("\"}") }])
```
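A quick client-side way to demonstrate the difference (a minimal sketch, not part of vLLM) is to reassemble the streamed `Arguments` fragments and diff the parsed result against the schema's `required` list:

```python
import json

def assemble_arguments(fragments):
    """Concatenate streamed tool-call argument fragments into one JSON string."""
    return "".join(fragments)

def missing_required(arguments_json, required):
    """Return the required parameter names absent from the assembled arguments."""
    parsed = json.loads(arguments_json)
    return [name for name in required if name not in parsed]

# Fragments as streamed in the buggy number-typed case above
number_fragments = ['{"x":', " ", "7", "}"]
print(missing_required(assemble_arguments(number_fragments), ["x", "y"]))  # ['y']

# Fragments from the working string-typed case
string_fragments = ['{"x": "', "2", '",', ' "', "y", '":', ' "', "7", '"}']
print(missing_required(assemble_arguments(string_fragments), ["x", "y"]))  # []
```

In both cases the fragments concatenate to valid JSON; only the number-typed schema yields arguments with a required key missing.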
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce:
1. Run `vllm serve Qwen/Qwen2.5-7B-Instruct --tool-call-parser hermes --enable-auto-tool-choice`
2. Use the curl request above to hit the vLLM API.
### Expected behavior
The expected behaviour is that all required arguments should be predicted, even for properties defined with type "number". | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39767/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39766 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39766/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39766/comments | https://api.github.com/repos/huggingface/transformers/issues/39766/events | https://github.com/huggingface/transformers/pull/39766 | 3,274,834,920 | PR_kwDOCUB6oc6hPerL | 39,766 | Fix OmDet test after arg deprecation | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-29T20:05:54 | 2025-07-29T20:18:54 | 2025-07-29T20:10:36 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39766",
"html_url": "https://github.com/huggingface/transformers/pull/39766",
"diff_url": "https://github.com/huggingface/transformers/pull/39766.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39766.patch",
"merged_at": "2025-07-29T20:10:36"
} | # What does this PR do?
As per the title! | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39766/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39765 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39765/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39765/comments | https://api.github.com/repos/huggingface/transformers/issues/39765/events | https://github.com/huggingface/transformers/pull/39765 | 3,274,717,991 | PR_kwDOCUB6oc6hPFTA | 39,765 | 🚨 Always return Cache objects in modelings (to align with generate) | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-29T19:21:40 | 2025-08-18T14:53:48 | 2025-08-18T14:26:36 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39765",
"html_url": "https://github.com/huggingface/transformers/pull/39765",
"diff_url": "https://github.com/huggingface/transformers/pull/39765.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39765.patch",
"merged_at": "2025-08-18T14:26:36"
} | This PR removes reliance on `Cache.from_legacy_cache(past_key_values)` for initializing `None` `past_key_values`, replacing it with explicit cache initialization. The previous approach also set `return_legacy_cache=True`, unintentionally returning legacy tuples and masking other issues.
This change is necessary to support the upcoming deprecation of `from_legacy_cache` in v4.58.
Note: This update revealed an issue in `pipelines`, where `loader_batch_item` expects legacy tuples when iterating over `ModelOutputs`. It failed when encountering `Cache` objects.
| {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39765/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39764 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39764/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39764/comments | https://api.github.com/repos/huggingface/transformers/issues/39764/events | https://github.com/huggingface/transformers/pull/39764 | 3,274,496,094 | PR_kwDOCUB6oc6hOWAE | 39,764 | Improve Gemma3n model and tests | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-29T17:58:29 | 2025-08-28T18:25:42 | 2025-08-28T18:25:42 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39764",
"html_url": "https://github.com/huggingface/transformers/pull/39764",
"diff_url": "https://github.com/huggingface/transformers/pull/39764.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39764.patch",
"merged_at": "2025-08-28T18:25:42"
Improves the Gemma3n model and tests by:
- Removing the hardcoded number of layers in the activation sparsity init.
- Adding a better explanation for layer reuse.
- Enabling and updating the integration tests.
- Removing unused pan-and-scan configuration options from the ImageProcessor.
- Skipping some incompatible tests.
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39764/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39763 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39763/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39763/comments | https://api.github.com/repos/huggingface/transformers/issues/39763/events | https://github.com/huggingface/transformers/issues/39763 | 3,274,159,127 | I_kwDOCUB6oc7DJ7QX | 39,763 | Instantiating `google/gemma-3-4b-pt` with AutoModelForSequenceClassification Reports Uninitialized Model | {
"login": "pks",
"id": 368183,
"node_id": "MDQ6VXNlcjM2ODE4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/368183?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pks",
"html_url": "https://github.com/pks",
"followers_url": "https://api.github.com/users/pks/followers",
"following_url": "https://api.github.com/users/pks/following{/other_user}",
"gists_url": "https://api.github.com/users/pks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pks/subscriptions",
"organizations_url": "https://api.github.com/users/pks/orgs",
"repos_url": "https://api.github.com/users/pks/repos",
"events_url": "https://api.github.com/users/pks/events{/privacy}",
"received_events_url": "https://api.github.com/users/pks/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-29T15:56:18 | 2025-08-11T07:21:14 | 2025-08-11T07:21:14 | NONE | null | null | null | null | ### System Info
I'm trying to use a Gemma3 model (non-instruction-tuned) for a classification task, and I was glad to see that the model appears to be supported for this task in the current code: https://github.com/huggingface/transformers/pull/39465
```
model = transformers.AutoModelForSequenceClassification.from_pretrained("google/gemma-3-4b-pt")
```
it essentially reports the model as being uninitialized (it lists both the `vision` and `language_model` weights), which is unexpected:
```
[...], 'model.vision_tower.vision_model.encoder.layers.9.self_attn.out_proj.weight', 'model.vision_tower.vision_model.encoder.layers.9.self_attn.q_proj.bias', 'model.vision_
tower.vision_model.encoder.layers.9.self_attn.q_proj.weight', 'model.vision_tower.vision_model.encoder.layers.9.self_attn.v_proj.bias', 'model.vision_tower.vision_model.enc
oder.layers.9.self_attn.v_proj.weight', 'model.vision_tower.vision_model.post_layernorm.bias', 'model.vision_tower.vision_model.post_layernorm.weight', 'score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
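For reference, this warning comes from a key-set comparison between the model's parameter names and the checkpoint's; for sequence classification on top of a pretrained backbone, normally only the classification head should be reported as newly initialized. A toy sketch of that comparison (not transformers' actual loading code; names are illustrative):

```python
# Parameter names stored in the checkpoint (illustrative subset)
checkpoint_keys = {
    "model.language_model.embed_tokens.weight",
    "model.vision_tower.vision_model.post_layernorm.weight",
}

# The classification model adds a fresh scoring head on top of the backbone
model_keys = checkpoint_keys | {"score.weight"}

# Names present in the model but absent from the checkpoint get the
# "newly initialized" warning; here that should only be the head.
newly_initialized = sorted(model_keys - checkpoint_keys)
print(newly_initialized)  # ['score.weight']
```

The bug is that the backbone weights above also end up in that difference, i.e. they are not matched against the checkpoint.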
My `transformers env`:
- `transformers` version: 4.55.0.dev0
- Platform: Linux-6.8.0-60-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA RTX A6000
I installed the current `HEAD` (abf101af) of the transformers repo via `uv`.
### Who can help?
@zucchini-nlp
@ArthurZucker
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
model = transformers.AutoModelForSequenceClassification.from_pretrained("google/gemma-3-4b-pt")
```
### Expected behavior
Loaded model with initialized weights. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39763/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39762 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39762/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39762/comments | https://api.github.com/repos/huggingface/transformers/issues/39762/events | https://github.com/huggingface/transformers/pull/39762 | 3,274,011,530 | PR_kwDOCUB6oc6hMsVO | 39,762 | Fix an invalid condition | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-29T15:08:31 | 2025-07-30T12:29:18 | 2025-07-30T12:19:17 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39762",
"html_url": "https://github.com/huggingface/transformers/pull/39762",
"diff_url": "https://github.com/huggingface/transformers/pull/39762.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39762.patch",
"merged_at": "2025-07-30T12:19:17"
} | # What does this PR do?
This was detected by ruff. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39762/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39761 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39761/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39761/comments | https://api.github.com/repos/huggingface/transformers/issues/39761/events | https://github.com/huggingface/transformers/pull/39761 | 3,273,929,362 | PR_kwDOCUB6oc6hMaek | 39,761 | add `libcst` to `extras["testing"]` in `setup.py` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-29T14:43:37 | 2025-07-29T14:58:54 | 2025-07-29T14:58:52 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39761",
"html_url": "https://github.com/huggingface/transformers/pull/39761",
"diff_url": "https://github.com/huggingface/transformers/pull/39761.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39761.patch",
"merged_at": "2025-07-29T14:58:51"
} | # What does this PR do?
So that tests like `tests/commands/test_serving.py` and `tests/utils/test_add_new_model_like.py` in the `tests_non_model` job can run.
This job uses the `huggingface/transformers-torch-light` image, which has `testing` in its Dockerfile.
(see https://app.circleci.com/pipelines/github/huggingface/transformers/139901/workflows/3cbe7abf-c504-458c-85b8-d7de51ded579/jobs/1853203/tests)
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39761/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39760 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39760/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39760/comments | https://api.github.com/repos/huggingface/transformers/issues/39760/events | https://github.com/huggingface/transformers/pull/39760 | 3,273,924,678 | PR_kwDOCUB6oc6hMZeP | 39,760 | [Draft] Add Llasa TTS family of models | {
"login": "ebezzam",
"id": 4757445,
"node_id": "MDQ6VXNlcjQ3NTc0NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4757445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ebezzam",
"html_url": "https://github.com/ebezzam",
"followers_url": "https://api.github.com/users/ebezzam/followers",
"following_url": "https://api.github.com/users/ebezzam/following{/other_user}",
"gists_url": "https://api.github.com/users/ebezzam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ebezzam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ebezzam/subscriptions",
"organizations_url": "https://api.github.com/users/ebezzam/orgs",
"repos_url": "https://api.github.com/users/ebezzam/repos",
"events_url": "https://api.github.com/users/ebezzam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ebezzam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-07-29T14:42:04 | 2025-08-16T09:07:01 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39760",
"html_url": "https://github.com/huggingface/transformers/pull/39760",
"diff_url": "https://github.com/huggingface/transformers/pull/39760.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39760.patch",
"merged_at": null
} | # What does this PR do?
This PR adds the Llasa TTS family of models:
- 1B: https://huggingface.co/HKUSTAudio/Llasa-1B
- 3B: https://huggingface.co/HKUSTAudio/Llasa-3B
- 8B: https://huggingface.co/HKUSTAudio/Llasa-8B
Reproducers for integration tests: https://gist.github.com/ebezzam/1863ec8eb7ec4afff02c26bdcb7691f9
TODO
- [ ] Batch integration tests
- [ ] Tokenizer and processing tests like [Dia](https://github.com/huggingface/transformers/tree/main/tests/models/dia)?
- [ ] Create public model cards (update text and add relevant tags and labels). Atm under my account ([1B](https://huggingface.co/bezzam/Llasa-1B), [3B](https://huggingface.co/bezzam/Llasa-3B), [8B](https://huggingface.co/bezzam/Llasa-8B)).
- [ ] Integrate with XCodec2 (Transformer version) when https://github.com/huggingface/transformers/pull/37868 merged
---
# Example usage
Below is example usage with my Hub checkpoint (compared to that of [original authors](https://huggingface.co/HKUSTAudio/Llasa-1B#how-to-use))
```python
"""
pip install torchao xcodec2==0.1.3
"""
import torch
from transformers import LlasaTokenizer, LlasaForCausalLM, LlasaProcessor
import soundfile as sf
from xcodec2.modeling_xcodec2 import XCodec2Model
model_repo = "bezzam/Llasa-1B"
# model_repo = "bezzam/Llasa-3B"
# model_repo = "bezzam/Llasa-8B"
torch_device = "cuda" if torch.cuda.is_available() else "cpu"
# load processor (tokenizer + audio codec)
processor = LlasaProcessor(
LlasaTokenizer.from_pretrained(model_repo),
XCodec2Model.from_pretrained("HKUSTAudio/xcodec2").eval().to(torch_device)
)
# # -- use below when `XCodec2Model` integrated into `transformers`
# processor = LlasaProcessor.from_pretrained(model_repo)
# load model
model = LlasaForCausalLM.from_pretrained(model_repo)
model.eval().to(torch_device)
# TTS: some text inputs don't work, which shows the limitations of this approach
input_text = "How much wood would a woodchuck chuck if a woodchuck could chuck speech tokens?"
with torch.no_grad():
# Tokenize the text
encoded_text = processor(input_text).to(torch_device)
# Generate the speech autoregressively
outputs = model.generate(
encoded_text["input_ids"],
do_sample=False,
max_length=600, # generates up to ~10s. Max allowed length is 2048, as Llasa was trained with max length 2048
top_p=1, # Adjusts the diversity of generated content
temperature=0.8, # Controls randomness in output
)
# decode to audio
gen_wav = processor.decode(outputs, input_offset=encoded_text["input_offset"])
fn = f"gen_{model_repo.split('/')[-1]}.wav"
sf.write(fn, gen_wav.cpu().numpy(), model.config.sampling_rate)
print(f"Generated speech saved to {fn}")
``` | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39760/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39759 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39759/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39759/comments | https://api.github.com/repos/huggingface/transformers/issues/39759/events | https://github.com/huggingface/transformers/pull/39759 | 3,273,753,333 | PR_kwDOCUB6oc6hLz0H | 39,759 | Fix version issue in modeling_utils.py | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-29T13:55:43 | 2025-07-29T14:15:32 | 2025-07-29T14:15:30 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39759",
"html_url": "https://github.com/huggingface/transformers/pull/39759",
"diff_url": "https://github.com/huggingface/transformers/pull/39759.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39759.patch",
"merged_at": "2025-07-29T14:15:30"
} | # What does this PR do?
As per the title. `nn.RMSNorm` was added in torch 2.4, which created friction when calling the function. This fixes it by simply removing it from the `isinstance` check, as the name test is enough anyway and already covers it | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39759/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39758 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39758/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39758/comments | https://api.github.com/repos/huggingface/transformers/issues/39758/events | https://github.com/huggingface/transformers/pull/39758 | 3,273,643,623 | PR_kwDOCUB6oc6hLb3B | 39,758 | Avoid OOM when other tests are failing | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-29T13:22:47 | 2025-07-29T13:36:07 | 2025-07-29T13:35:45 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39758",
"html_url": "https://github.com/huggingface/transformers/pull/39758",
"diff_url": "https://github.com/huggingface/transformers/pull/39758.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39758.patch",
"merged_at": "2025-07-29T13:35:45"
} | # What does this PR do?
Avoid OOM when other tests are failing | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39758/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39757 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39757/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39757/comments | https://api.github.com/repos/huggingface/transformers/issues/39757/events | https://github.com/huggingface/transformers/pull/39757 | 3,273,531,796 | PR_kwDOCUB6oc6hLDaL | 39,757 | AMD disable torchcodec | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-29T12:51:26 | 2025-07-29T13:07:57 | 2025-07-29T13:07:25 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39757",
"html_url": "https://github.com/huggingface/transformers/pull/39757",
"diff_url": "https://github.com/huggingface/transformers/pull/39757.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39757.patch",
"merged_at": "2025-07-29T13:07:25"
Temporarily disable `torchcodec` (introduced in #39669) in the AMD docker image because of a strange segfault error. | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39757/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39756 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39756/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39756/comments | https://api.github.com/repos/huggingface/transformers/issues/39756/events | https://github.com/huggingface/transformers/pull/39756 | 3,273,490,758 | PR_kwDOCUB6oc6hK6bf | 39,756 | Fix rope_deltas corruption in Qwen2.5VL during CFG generation | {
"login": "notkisk",
"id": 107971634,
"node_id": "U_kgDOBm-EMg",
"avatar_url": "https://avatars.githubusercontent.com/u/107971634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/notkisk",
"html_url": "https://github.com/notkisk",
"followers_url": "https://api.github.com/users/notkisk/followers",
"following_url": "https://api.github.com/users/notkisk/following{/other_user}",
"gists_url": "https://api.github.com/users/notkisk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/notkisk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/notkisk/subscriptions",
"organizations_url": "https://api.github.com/users/notkisk/orgs",
"repos_url": "https://api.github.com/users/notkisk/repos",
"events_url": "https://api.github.com/users/notkisk/events{/privacy}",
"received_events_url": "https://api.github.com/users/notkisk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-29T12:40:02 | 2025-08-04T13:32:25 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39756",
"html_url": "https://github.com/huggingface/transformers/pull/39756",
"diff_url": "https://github.com/huggingface/transformers/pull/39756.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39756.patch",
"merged_at": null
} | ## Summary
Fixes #39749
This PR addresses a critical issue where the `rope_deltas` attribute in Qwen2.5VL models gets corrupted during Classifier-Free Guidance (CFG) generation due to shared mutable state between forward passes.
**Problem**: During CFG generation, the model performs two forward passes (conditional and unconditional). The `rope_deltas` state was being modified during the first pass and incorrectly reused in the second pass, leading to position embedding corruption and incorrect generation results.
**Solution**: Implemented proper state management by:
- Adding `_update_model_kwargs_for_generation` method to preserve `rope_deltas` in model kwargs
- Modifying `prepare_inputs_for_generation` to restore `rope_deltas` from kwargs when available
- Ensuring each forward pass gets the correct `rope_deltas` state
**Impact**: Fixes CFG generation for Qwen2.5VL models, ensuring correct position calculations and generation quality.
## Changes Made
- Modified `src/transformers/models/qwen2_5_vl/modular_qwen2_5_vl.py` with the fix
- Regenerated `src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py` from the modular source
- Added comprehensive state management for `rope_deltas` during generation
## Test Plan
- [x] Verified the fix addresses the original issue reproduction case
- [x] Tested CFG generation with multiple forward passes
- [x] Confirmed `rope_deltas` state is properly preserved and restored
- [x] Ran quality checks and style formatting
- [x] Ensured backward compatibility with existing functionality
The fix is ready and properly references issue #39749 as requested. The branch contains all the necessary changes to resolve the rope_deltas corruption issue during CFG generation. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39756/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39755 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39755/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39755/comments | https://api.github.com/repos/huggingface/transformers/issues/39755/events | https://github.com/huggingface/transformers/issues/39755 | 3,273,441,925 | I_kwDOCUB6oc7DHMKF | 39,755 | Follow-up on Issues Regarding Training State Restoration from Interruptions | {
"login": "rangehow",
"id": 88258534,
"node_id": "MDQ6VXNlcjg4MjU4NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rangehow",
"html_url": "https://github.com/rangehow",
"followers_url": "https://api.github.com/users/rangehow/followers",
"following_url": "https://api.github.com/users/rangehow/following{/other_user}",
"gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rangehow/subscriptions",
"organizations_url": "https://api.github.com/users/rangehow/orgs",
"repos_url": "https://api.github.com/users/rangehow/repos",
"events_url": "https://api.github.com/users/rangehow/events{/privacy}",
"received_events_url": "https://api.github.com/users/rangehow/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-29T12:26:20 | 2025-09-28T08:02:56 | 2025-09-28T08:02:56 | CONTRIBUTOR | null | null | null | null | Hi team,
I would like to follow up on the status of the following issues. Both involve erroneous behavior when resuming training from an interruption. The first is that, regardless of the timestep at which training is interrupted, in most cases some amount of data goes untrained (https://github.com/huggingface/transformers/issues/38939). The second is that the random state cannot be guaranteed to be consistent when resuming, which may affect random operations in the random sampler or collator and thus break consistency with an uninterrupted training run (https://github.com/huggingface/transformers/issues/39215).
I have provided minimal reproducible code, a detailed description of the problem, and a possible set of fixes in the issue descriptions. However, I have not received any further response.
If you believe this direction for a fix is correct, I would be very happy to create PRs to contribute these fixes.
I hope to get some feedback on whether this solution is feasible. Thank you for your time and excellent work on this project. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39755/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39754 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39754/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39754/comments | https://api.github.com/repos/huggingface/transformers/issues/39754/events | https://github.com/huggingface/transformers/pull/39754 | 3,273,378,553 | PR_kwDOCUB6oc6hKhfC | 39,754 | Fix GPT2 with cross attention | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-29T12:07:46 | 2025-07-29T13:40:31 | 2025-07-29T13:40:31 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39754",
"html_url": "https://github.com/huggingface/transformers/pull/39754",
"diff_url": "https://github.com/huggingface/transformers/pull/39754.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39754.patch",
"merged_at": "2025-07-29T13:40:31"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/39746 | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39754/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39753 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39753/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39753/comments | https://api.github.com/repos/huggingface/transformers/issues/39753/events | https://github.com/huggingface/transformers/issues/39753 | 3,273,357,891 | I_kwDOCUB6oc7DG3pD | 39,753 | Inv frequency has not default, going against our philosophy | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | open | false | null | [] | null | [] | 2025-07-29T12:00:30 | 2025-10-06T04:41:31 | null | COLLABORATOR | null | null | null | null | This is long due, but the RotaryEmbedding's `default` path should be explicit, and if the rope type is not default, only then do we introduce redirection. It could even be in the decorator itself with a "post_init" update of the inv freq!
https://github.com/huggingface/transformers/blob/95faabf0a6cd845f4c5548697e288a79e424b096/src/transformers/models/llama/modeling_llama.py#L83-L86 | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39753/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39753/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39752 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39752/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39752/comments | https://api.github.com/repos/huggingface/transformers/issues/39752/events | https://github.com/huggingface/transformers/pull/39752 | 3,272,975,956 | PR_kwDOCUB6oc6hJKJ1 | 39,752 | Use `--gpus all` in workflow files | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-29T09:55:08 | 2025-07-29T12:53:35 | 2025-07-29T12:53:33 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39752",
"html_url": "https://github.com/huggingface/transformers/pull/39752",
"diff_url": "https://github.com/huggingface/transformers/pull/39752.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39752.patch",
"merged_at": "2025-07-29T12:53:33"
} | # What does this PR do?
As discussed with the infra team. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39752/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39751 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39751/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39751/comments | https://api.github.com/repos/huggingface/transformers/issues/39751/events | https://github.com/huggingface/transformers/pull/39751 | 3,272,972,354 | PR_kwDOCUB6oc6hJJWL | 39,751 | 🌐 [i18n-KO] Translated `text-to-speech.md` to Korean | {
"login": "taemincode",
"id": 187865781,
"node_id": "U_kgDOCzKatQ",
"avatar_url": "https://avatars.githubusercontent.com/u/187865781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taemincode",
"html_url": "https://github.com/taemincode",
"followers_url": "https://api.github.com/users/taemincode/followers",
"following_url": "https://api.github.com/users/taemincode/following{/other_user}",
"gists_url": "https://api.github.com/users/taemincode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taemincode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taemincode/subscriptions",
"organizations_url": "https://api.github.com/users/taemincode/orgs",
"repos_url": "https://api.github.com/users/taemincode/repos",
"events_url": "https://api.github.com/users/taemincode/events{/privacy}",
"received_events_url": "https://api.github.com/users/taemincode/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-29T09:54:11 | 2025-07-29T09:54:11 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39751",
"html_url": "https://github.com/huggingface/transformers/pull/39751",
"diff_url": "https://github.com/huggingface/transformers/pull/39751.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39751.patch",
"merged_at": null
} | <!-- Please title the PR "🌐 [i18n-KO] Translated `<your_file>.md` to Korean" -->
# What does this PR do?
Translated the `text-to-speech.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. Please reveal the comment below requesting a review from the KREW team members only after all of the checks above are complete! -->
May you please review this PR?
<!-- @jungnerd, @luckyvickyricky, @chelsseeey, @skwh54, @amo33, @maximizemaxwell, @D15M4S -->
<!-- @harheem, @nsbg, @Youngdong2, @xhaktm00, @ssunbear, @ChoHyoungSeo, @judy-choi -->
<!-- @4N3MONE, @Kim-Ju-won, @ahnjj, @FacerAin, @ssum21, @TaskerJang, @HyunZ118 -->
<!-- @yijun-lee, @songi104, @chhaewxn, @AhnJoonSung, @jihyun-0611, @seopp, @pyapyapya -->
@yijun-lee, @harheem, @4N3MONE, @jungnerd
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Please reveal the comment below after the KREW team members have finished their reviews! -->
<!-- @stevhliu May you please review this PR? --> | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39751/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39750 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39750/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39750/comments | https://api.github.com/repos/huggingface/transformers/issues/39750/events | https://github.com/huggingface/transformers/pull/39750 | 3,272,903,390 | PR_kwDOCUB6oc6hI6FX | 39,750 | [modenbert] fix regression | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-29T09:36:12 | 2025-07-29T18:29:53 | 2025-07-29T14:58:59 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39750",
"html_url": "https://github.com/huggingface/transformers/pull/39750",
"diff_url": "https://github.com/huggingface/transformers/pull/39750.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39750.patch",
"merged_at": "2025-07-29T14:58:59"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/39747
Two issues:
- The attention-setting function was apparently renamed afterwards, but ModernBert still overrode the old name and started to fail.
- The RoPE theta differs between global and local attention, and is already being updated in https://github.com/huggingface/transformers/pull/39397. However, that PR is taking more time because we want to update all models, so this PR monkey-patches `config.rope_theta`, similar to what we have in Gemma3
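As a rough illustration, a Gemma3-style monkey patch amounts to swapping `rope_theta` per layer type before building the rotary embedding. The sketch below uses assumed attribute names (`global_rope_theta`, `local_rope_theta`, the layer-type strings) purely for demonstration; it is not the actual ModernBERT code:

```python
# Hedged sketch of per-layer-type rope_theta selection (illustrative names).
import copy


class DummyConfig:
    # Assumed values for illustration only.
    rope_theta = None
    global_rope_theta = 160_000.0
    local_rope_theta = 10_000.0


def rope_config_for(layer_type: str, config: DummyConfig) -> DummyConfig:
    """Return a copy of `config` whose rope_theta matches the layer type."""
    cfg = copy.deepcopy(config)
    cfg.rope_theta = (
        cfg.local_rope_theta if layer_type == "sliding_attention" else cfg.global_rope_theta
    )
    return cfg


print(rope_config_for("sliding_attention", DummyConfig()).rope_theta)  # 10000.0
print(rope_config_for("full_attention", DummyConfig()).rope_theta)     # 160000.0
```

Each layer then builds its rotary embedding from the patched copy, so the original config object is never mutated.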
Slow tests ✅ | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39750/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39750/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39749 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39749/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39749/comments | https://api.github.com/repos/huggingface/transformers/issues/39749/events | https://github.com/huggingface/transformers/issues/39749 | 3,272,891,722 | I_kwDOCUB6oc7DFF1K | 39,749 | Qwen2_5_VLForConditionalGeneration cfg forward twice error | {
"login": "guozhiyao",
"id": 21999339,
"node_id": "MDQ6VXNlcjIxOTk5MzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/21999339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guozhiyao",
"html_url": "https://github.com/guozhiyao",
"followers_url": "https://api.github.com/users/guozhiyao/followers",
"following_url": "https://api.github.com/users/guozhiyao/following{/other_user}",
"gists_url": "https://api.github.com/users/guozhiyao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guozhiyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guozhiyao/subscriptions",
"organizations_url": "https://api.github.com/users/guozhiyao/orgs",
"repos_url": "https://api.github.com/users/guozhiyao/repos",
"events_url": "https://api.github.com/users/guozhiyao/events{/privacy}",
"received_events_url": "https://api.github.com/users/guozhiyao/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-29T09:33:08 | 2025-09-07T08:02:53 | 2025-09-07T08:02:53 | NONE | null | null | null | null | ### System Info
transformers 4.49.0
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
accepted_tokens = torch.zeros(batch_size, 0, dtype=torch.long, device=input_ids.device)
attention_mask = input_ids.ne(self.config.t5_pad_token_id)
pos_kwargs = dict(
inputs_embeds=inputs_emb, attention_mask=attention_mask,
use_cache=True
)
pos_kwargs = self._get_initial_cache_position(None, pos_kwargs)
attention_mask = negative_input_ids.ne(self.config.t5_pad_token_id)
neg_kwargs = dict(
inputs_embeds=negative_emb, attention_mask=attention_mask,
use_cache=True
)
neg_kwargs = self._get_initial_cache_position(None, neg_kwargs)
first_token = None
for it in range(max_steps):
pos_kwargs["input_ids"] = first_token
pos_kwargs = self.prepare_inputs_for_generation(**pos_kwargs)
output = self(**pos_kwargs)
logits = output.logits
pos_kwargs = self._update_model_kwargs_for_generation(output, pos_kwargs)
cond_draft_logits = logits[:, -1:, :]
if cfg_scale > 1.0 and negative_input_ids is not None:
neg_kwargs["input_ids"] = first_token
neg_kwargs = self.prepare_inputs_for_generation(**neg_kwargs)
output = self(**neg_kwargs)
uncond_logits = output.logits
neg_kwargs = self._update_model_kwargs_for_generation(output, neg_kwargs)
uncond_draft_logits = uncond_logits[:, -1:, :]
draft_logits = uncond_draft_logits + cfg_scale * (cond_draft_logits - uncond_draft_logits)
else:
draft_logits = cond_draft_logits
draft_logits /= temperature
draft_probs = F.softmax(draft_logits, dim=-1, dtype=torch.float32)
draft_tokens = torch.argmax(draft_probs, dim=-1)
first_token = draft_tokens[:, :1]
accepted_tokens = torch.cat([accepted_tokens, first_token], dim=1)
neg_kwargs["inputs_embeds"] = None
pos_kwargs["inputs_embeds"] = None
# 10. Final Output --------------------------------------------------------
output = accepted_tokens[:, :max_steps]
```
### Expected behavior
I use Qwen2-VL for CFG generation, but Qwen keeps `self.rope_deltas` as model state, which is overwritten by the second forward.
I modified the Qwen code as follows:
1. Added `_update_model_kwargs_for_generation` to save `rope_deltas` into `model_kwargs`:
```
def _update_model_kwargs_for_generation(
self,
outputs: ModelOutput,
model_kwargs: Dict[str, Any],
is_encoder_decoder: bool = False,
num_new_tokens: int = 1,
) -> Dict[str, Any]:
model_kwargs["rope_deltas"] = self.rope_deltas
return super()._update_model_kwargs_for_generation(outputs, model_kwargs, is_encoder_decoder, num_new_tokens)
```
2. In `prepare_inputs_for_generation`, read `rope_deltas` back from `model_kwargs`:
`self.rope_deltas = kwargs.get("rope_deltas", None)`
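As a minimal sketch of this fix pattern, the idea is to round-trip the mutable decoding state through each stream's `model_kwargs`, so the two interleaved forwards (conditional / unconditional) don't clobber each other. The class below is a toy stand-in, not Qwen2.5-VL; only the method names mirror the real API:

```python
# Toy model: shared mutable state round-tripped through per-stream kwargs.
class TinyModel:
    def __init__(self):
        self.rope_deltas = None  # shared mutable state, as in Qwen2.5-VL

    def prepare_inputs_for_generation(self, **kwargs):
        # Restore this stream's state before its forward pass.
        self.rope_deltas = kwargs.get("rope_deltas", None)
        return kwargs

    def forward(self, **kwargs):
        # The forward pass (re)computes rope_deltas; emulate with a counter.
        self.rope_deltas = (self.rope_deltas or 0) + 1
        return {}

    def _update_model_kwargs_for_generation(self, outputs, model_kwargs):
        # Save the state back into this stream's kwargs after the forward.
        model_kwargs["rope_deltas"] = self.rope_deltas
        return model_kwargs


model = TinyModel()
pos_kwargs, neg_kwargs = {}, {}
for _ in range(3):  # interleave the two streams, as in the CFG loop
    pos_kwargs = model.prepare_inputs_for_generation(**pos_kwargs)
    model.forward(**pos_kwargs)
    pos_kwargs = model._update_model_kwargs_for_generation({}, pos_kwargs)

    neg_kwargs = model.prepare_inputs_for_generation(**neg_kwargs)
    model.forward(**neg_kwargs)
    neg_kwargs = model._update_model_kwargs_for_generation({}, neg_kwargs)

print(pos_kwargs["rope_deltas"], neg_kwargs["rope_deltas"])  # 3 3
```

Without the restore step in `prepare_inputs_for_generation`, each stream's forward would pick up the other stream's value, diverging from single-stream behavior.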
And the bug is fixed. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39749/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39748 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39748/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39748/comments | https://api.github.com/repos/huggingface/transformers/issues/39748/events | https://github.com/huggingface/transformers/pull/39748 | 3,272,795,418 | PR_kwDOCUB6oc6hIiV3 | 39,748 | fix cache inheritance | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-29T09:08:49 | 2025-07-29T09:24:46 | 2025-07-29T09:24:44 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39748",
"html_url": "https://github.com/huggingface/transformers/pull/39748",
"diff_url": "https://github.com/huggingface/transformers/pull/39748.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39748.patch",
"merged_at": "2025-07-29T09:24:44"
} | # What does this PR do?
Paged attention was broken because it is not layered yet! Needs to be in the patch! | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39748/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39747 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39747/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39747/comments | https://api.github.com/repos/huggingface/transformers/issues/39747/events | https://github.com/huggingface/transformers/issues/39747 | 3,272,657,693 | I_kwDOCUB6oc7DEMsd | 39,747 | ModernBERT has been totally destroyed by PR #38974 and #38838 | {
"login": "rangehow",
"id": 88258534,
"node_id": "MDQ6VXNlcjg4MjU4NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rangehow",
"html_url": "https://github.com/rangehow",
"followers_url": "https://api.github.com/users/rangehow/followers",
"following_url": "https://api.github.com/users/rangehow/following{/other_user}",
"gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rangehow/subscriptions",
"organizations_url": "https://api.github.com/users/rangehow/orgs",
"repos_url": "https://api.github.com/users/rangehow/repos",
"events_url": "https://api.github.com/users/rangehow/events{/privacy}",
"received_events_url": "https://api.github.com/users/rangehow/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-29T08:27:46 | 2025-07-29T14:59:00 | 2025-07-29T14:59:00 | CONTRIBUTOR | null | null | null | null | ### System Info
transformers 4.54.0
### Who can help?
@zucchini-nlp @ArthurZucker
### details
Regarding PR #38974, it modifies the model's default attention initialization behavior.
Referring to the ModernBERT code:
https://github.com/huggingface/transformers/blob/75794792ad6f23f09729674bc97a8338085f22b2/src/transformers/models/modernbert/modular_modernbert.py#L814-L832
The intention is to set the attention implementation to "FA2" if it is available and the user has not specified one.
The current situation is that this function is not even being called. Here is a simple script to reproduce it:
```python
from transformers import ModernBertConfig,ModernBertForMaskedLM
config = ModernBertConfig()
model = ModernBertForMaskedLM(config)
print(model.config._attn_implementation)
# sdpa
print(model._flash_attn_2_can_dispatch())
# You are attempting to use Flash Attention 2 without specifying a torch dtype. This might lead to unexpected behaviour
# You are attempting to use Flash Attention 2 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
# True
```
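The mechanism behind "this function is not even being called" can be sketched in isolation: when a base-class hook is renamed, a subclass that still overrides the old name becomes dead code, and the library default wins silently. The class and method names below are assumed stand-ins for illustration, not the actual transformers code:

```python
# Illustrative reconstruction of a renamed hook stranding a subclass override.
class Base:
    def init_attn(self):
        # The base class now calls the *new* hook name.
        return self._check_and_adjust_attn_implementation()

    def _check_and_adjust_attn_implementation(self):
        return "sdpa"  # library default


class ModernBertLike(Base):
    # Override of the *old* hook name: never invoked after the rename.
    def _autoset_attn_implementation(self):
        return "flash_attention_2"


print(ModernBertLike().init_attn())  # sdpa, not the intended flash_attention_2
```

No error is raised anywhere, which is why the regression only shows up as a silent performance change.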
This behavior subtly degrades the experience for ModernBERT users. In the previous version, if the user did not specify the attention mechanism, ModernBERT would automatically use FA2 attention when available. Now they might see a significant performance degradation simply by updating their transformers version, because the ModernBERT team also carefully designed an unpadding process for the FA2 path.
Furthermore, ModernBERT is not a model that will function correctly if `model.config._attn_implementation` is merely set after initialization, because components such as the rotary embedding can differ depending on the attention implementation. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39747/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39746 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39746/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39746/comments | https://api.github.com/repos/huggingface/transformers/issues/39746/events | https://github.com/huggingface/transformers/issues/39746 | 3,272,602,205 | I_kwDOCUB6oc7DD_Jd | 39,746 | encoder decoder model compile failed after refactor cache | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-29T08:11:06 | 2025-07-29T13:40:32 | 2025-07-29T13:40:32 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.55.0.dev0
- Platform: Linux-6.11.0-28-generic-x86_64-with-glibc2.35
- Python version: 3.11.13
- Huggingface_hub version: 0.34.2
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.9.0.dev20250714+cpu (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@zucchini-nlp @ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
import time
import requests
import torch
import PIL.Image
from transformers import pipeline
model_id = "nlpconnect/vit-gpt2-image-captioning"
image_to_text = pipeline("image-to-text", model=model_id, device="cpu", torch_dtype=torch.float16)
image_url = "https://ankur3107.github.io/assets/images/image-captioning-example.png"
image = PIL.Image.open(requests.get(image_url, stream=True, timeout=3000).raw)
for _ in range(10):
output = image_to_text(image)
start = time.time()
output = image_to_text(image)
end = time.time()
print(f"eager mode pipeline latency {end - start}")
image_to_text.model.forward = torch.compile(image_to_text.model.forward)
for _ in range(10):
output = image_to_text(image)
start = time.time()
output = image_to_text(image)
end = time.time()
print(f"compile mode pipeline latency {end - start}")
```
error log:
```
torch._dynamo.exc.TorchRuntimeError: Dynamo failed to run FX node with fake tensors: call_function <built-in function scaled_dot_product_attention>(*(FakeTensor(..., size=(1, 12, 1, 64), dtype=torch.float16), FakeTensor(..., size=(1, 12, 394, 64), dtype=torch.float16), FakeTensor(..., size=(1, 12, 394, 64), dtype=torch.float16)), **{'attn_mask': FakeTensor(..., size=(1, 1, 1, 197), dtype=torch.float16), 'dropout_p': 0.0, 'scale': None, 'is_causal': False}): got RuntimeError('Attempting to broadcast a dimension of length 197 at -1! Mismatching argument at index 1 had torch.Size([1, 1, 1, 197]); but expected shape should be broadcastable to [1, 12, 1, 394]')
```
### Expected behavior
Before PR [38635](https://github.com/huggingface/transformers/pull/38635), the script ran well and achieved a 1.5x speed-up. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39746/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39746/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39745 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39745/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39745/comments | https://api.github.com/repos/huggingface/transformers/issues/39745/events | https://github.com/huggingface/transformers/pull/39745 | 3,272,016,117 | PR_kwDOCUB6oc6hF3WM | 39,745 | [Fix] import two missing typos in `models/__init__.py` for typo checking | {
"login": "hebangwen",
"id": 32662175,
"node_id": "MDQ6VXNlcjMyNjYyMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/32662175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hebangwen",
"html_url": "https://github.com/hebangwen",
"followers_url": "https://api.github.com/users/hebangwen/followers",
"following_url": "https://api.github.com/users/hebangwen/following{/other_user}",
"gists_url": "https://api.github.com/users/hebangwen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hebangwen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hebangwen/subscriptions",
"organizations_url": "https://api.github.com/users/hebangwen/orgs",
"repos_url": "https://api.github.com/users/hebangwen/repos",
"events_url": "https://api.github.com/users/hebangwen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hebangwen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-29T04:31:26 | 2025-07-29T09:35:22 | 2025-07-29T09:35:22 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39745",
"html_url": "https://github.com/huggingface/transformers/pull/39745",
"diff_url": "https://github.com/huggingface/transformers/pull/39745.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39745.patch",
"merged_at": "2025-07-29T09:35:22"
} | # What does this PR do?
I find that Pylance in VSCode cannot automatically import `Gemma3NForConditionalGeneration` and `Qwen2_5OmniModel`. The reason is that these symbols are not exported for type checking. When running in lazy import mode, these symbols are not exported and therefore have no type hints.
<img width="741" height="85" alt="image" src="https://github.com/user-attachments/assets/e3901a31-d101-40a1-bc81-24cea04b0e49" />
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39745/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39744 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39744/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39744/comments | https://api.github.com/repos/huggingface/transformers/issues/39744/events | https://github.com/huggingface/transformers/issues/39744 | 3,271,756,636 | I_kwDOCUB6oc7DAwtc | 39,744 | _supports_static_cache disappear | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-29T02:36:04 | 2025-07-29T08:17:00 | 2025-07-29T08:17:00 | CONTRIBUTOR | null | null | null | null | ### System Info
transformers main branch
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I see that the attribute `_supports_static_cache` has disappeared from the model. I used to check `model._supports_static_cache` before setting `cache_implementation=True`. For now, can I assume that all models support static cache?
### Expected behavior
All models support static cache, as `_supports_static_cache` is deprecated. Or is there another method to check whether the model supports static cache? | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39744/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39743 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39743/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39743/comments | https://api.github.com/repos/huggingface/transformers/issues/39743/events | https://github.com/huggingface/transformers/pull/39743 | 3,271,661,591 | PR_kwDOCUB6oc6hErZb | 39,743 | Audio encodings now match conv2d weight dtype in Gemma3nAudioSSCPConvBlock | {
"login": "Malav-P",
"id": 96792879,
"node_id": "U_kgDOBcTxLw",
"avatar_url": "https://avatars.githubusercontent.com/u/96792879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Malav-P",
"html_url": "https://github.com/Malav-P",
"followers_url": "https://api.github.com/users/Malav-P/followers",
"following_url": "https://api.github.com/users/Malav-P/following{/other_user}",
"gists_url": "https://api.github.com/users/Malav-P/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Malav-P/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Malav-P/subscriptions",
"organizations_url": "https://api.github.com/users/Malav-P/orgs",
"repos_url": "https://api.github.com/users/Malav-P/repos",
"events_url": "https://api.github.com/users/Malav-P/events{/privacy}",
"received_events_url": "https://api.github.com/users/Malav-P/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-29T01:31:30 | 2025-08-12T17:50:45 | 2025-08-12T12:08:28 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39743",
"html_url": "https://github.com/huggingface/transformers/pull/39743",
"diff_url": "https://github.com/huggingface/transformers/pull/39743.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39743.patch",
"merged_at": "2025-08-12T12:08:28"
} | # What does this PR do?
The change ensures that the audio encodings are cast to the same dtype as the `conv.weight` tensor. This is done by appending `.to(self.conv.weight.dtype)` to the `audio_encodings_padded` rvalue.
Fixes an issue where the conv2d forward pass throws an error. Code to reproduce the error (run on a Mac M1):
```python
from transformers import pipeline
import torch
pipe = pipeline(
"image-text-to-text",
model="google/gemma-3n-e2b-it",
device=0,
torch_dtype=torch.bfloat16,
cache_implementation="static"
)
messages = [
{
"role": "user",
"content": [
{"type": "audio", "audio": "5676.wav"},
{"type": "text", "text": "Transcribe this audio file."}
]
}
]
output = pipe(text=messages, max_new_tokens=200, torch_dtype=torch.bfloat16)
print(output[0]["generated_text"][-1]["content"])
```
Full stack trace is attached [error.txt](https://github.com/user-attachments/files/21478915/error.txt).
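The fix boils down to casting the activations to the weight dtype before the convolution. A minimal NumPy sketch of the pattern (illustration only; the real code operates on `torch` tensors inside `Gemma3nAudioSSCPConvBlock`, and these array shapes are made up):

```python
import numpy as np

# Stand-ins: the conv weight is in half precision (bf16 in the real model),
# while the padded audio encodings arrive in float32.
weight = np.zeros((4, 4), dtype=np.float16)
audio_encodings_padded = np.ones((4, 4), dtype=np.float32)

# Mirror of `audio_encodings_padded.to(self.conv.weight.dtype)`:
# cast the activations to the weight dtype so the op sees matching dtypes.
audio_cast = audio_encodings_padded.astype(weight.dtype)

out = audio_cast @ weight
assert audio_cast.dtype == weight.dtype == out.dtype
```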
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Suggested Reviewers
@ArthurZucker
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39743/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39742 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39742/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39742/comments | https://api.github.com/repos/huggingface/transformers/issues/39742/events | https://github.com/huggingface/transformers/pull/39742 | 3,271,645,790 | PR_kwDOCUB6oc6hEoGj | 39,742 | Update HuBERT model card according to template | {
"login": "reedrya",
"id": 157441470,
"node_id": "U_kgDOCWJdvg",
"avatar_url": "https://avatars.githubusercontent.com/u/157441470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reedrya",
"html_url": "https://github.com/reedrya",
"followers_url": "https://api.github.com/users/reedrya/followers",
"following_url": "https://api.github.com/users/reedrya/following{/other_user}",
"gists_url": "https://api.github.com/users/reedrya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reedrya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reedrya/subscriptions",
"organizations_url": "https://api.github.com/users/reedrya/orgs",
"repos_url": "https://api.github.com/users/reedrya/repos",
"events_url": "https://api.github.com/users/reedrya/events{/privacy}",
"received_events_url": "https://api.github.com/users/reedrya/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-29T01:17:22 | 2025-08-10T18:32:46 | 2025-08-10T18:32:45 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39742",
"html_url": "https://github.com/huggingface/transformers/pull/39742",
"diff_url": "https://github.com/huggingface/transformers/pull/39742.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39742.patch",
"merged_at": "2025-08-10T18:32:45"
} | # What does this PR do?
This PR updates the HuBERT model card to comply with the format introduced in #36979.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@stevhliu
## Notes
- I did not include the **AttentionMaskVisualizer** section since I'm unfamiliar with it. Please advise if it should be added.
- I preserved the **Flash Attention 2** section from the original model card, since it appears to be relevant and informative. Let me know if you'd prefer it removed for template consistency. | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39742/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39741 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39741/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39741/comments | https://api.github.com/repos/huggingface/transformers/issues/39741/events | https://github.com/huggingface/transformers/pull/39741 | 3,271,519,598 | PR_kwDOCUB6oc6hENd2 | 39,741 | Fix HfArgumentParser to filter out dict types from Union | {
"login": "st81",
"id": 58893365,
"node_id": "MDQ6VXNlcjU4ODkzMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/58893365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/st81",
"html_url": "https://github.com/st81",
"followers_url": "https://api.github.com/users/st81/followers",
"following_url": "https://api.github.com/users/st81/following{/other_user}",
"gists_url": "https://api.github.com/users/st81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/st81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/st81/subscriptions",
"organizations_url": "https://api.github.com/users/st81/orgs",
"repos_url": "https://api.github.com/users/st81/repos",
"events_url": "https://api.github.com/users/st81/events{/privacy}",
"received_events_url": "https://api.github.com/users/st81/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-28T23:25:07 | 2025-08-05T12:27:13 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39741",
"html_url": "https://github.com/huggingface/transformers/pull/39741",
"diff_url": "https://github.com/huggingface/transformers/pull/39741.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39741.patch",
"merged_at": null
} | # What does this PR do?
This PR updates `HfArgumentParser` to filter out dict types from `Union` annotations. This prevents runtime errors with argparse, which does not support parsing arguments as dicts. Previous work (PR #39467) reordered `Union[str, dict]` to `Union[dict, str]` throughout the codebase, but this approach is fragile and could break if new `Union[str, dict]` annotations are introduced. This change ensures that dict types are always filtered out, making the parser more robust and maintainable.
Fixes https://github.com/huggingface/transformers/issues/39462
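The filtering idea can be sketched as follows (a hypothetical standalone helper for illustration only, not the actual `HfArgumentParser` code):

```python
from typing import Union, get_args, get_origin

def filter_dict_from_union(annotation):
    """Drop dict members from a Union annotation so argparse never sees a
    dict type it cannot parse. Illustrative helper; names are made up."""
    if get_origin(annotation) is not Union:
        return annotation
    kept = [
        a for a in get_args(annotation)
        if a is not dict and get_origin(a) is not dict  # bare dict or dict[...]
    ]
    if len(kept) == 1:
        return kept[0]
    return Union[tuple(kept)]

# Works regardless of member order, unlike the reordering approach:
assert filter_dict_from_union(Union[str, dict]) is str
assert filter_dict_from_union(Union[dict, str]) is str
```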
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- @SunMarc
- @qgallouedec
(same as previous PR) | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39741/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39740 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39740/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39740/comments | https://api.github.com/repos/huggingface/transformers/issues/39740/events | https://github.com/huggingface/transformers/pull/39740 | 3,271,128,166 | PR_kwDOCUB6oc6hC5ev | 39,740 | [Tests] [Bugfix] Make weights tied for `dynamic_tied_weights` test | {
"login": "kylesayrs",
"id": 17103692,
"node_id": "MDQ6VXNlcjE3MTAzNjky",
"avatar_url": "https://avatars.githubusercontent.com/u/17103692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylesayrs",
"html_url": "https://github.com/kylesayrs",
"followers_url": "https://api.github.com/users/kylesayrs/followers",
"following_url": "https://api.github.com/users/kylesayrs/following{/other_user}",
"gists_url": "https://api.github.com/users/kylesayrs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylesayrs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylesayrs/subscriptions",
"organizations_url": "https://api.github.com/users/kylesayrs/orgs",
"repos_url": "https://api.github.com/users/kylesayrs/repos",
"events_url": "https://api.github.com/users/kylesayrs/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylesayrs/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-28T19:40:18 | 2025-08-05T18:50:12 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39740",
"html_url": "https://github.com/huggingface/transformers/pull/39740",
"diff_url": "https://github.com/huggingface/transformers/pull/39740.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39740.patch",
"merged_at": null
} | ## Background ##
* When I added this test, I neglected to actually tie the weights
## Purpose ##
* The purpose of `test_save_offloaded_model_dynamic_tied_weights_keys` is to test the case when tied weights are dynamically added. While the test achieved its original goal of testing https://github.com/huggingface/transformers/pull/39263, it should also test that saving still works if the weights are actually tied
* Mostly a typo fix | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39740/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39739 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39739/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39739/comments | https://api.github.com/repos/huggingface/transformers/issues/39739/events | https://github.com/huggingface/transformers/pull/39739 | 3,270,831,339 | PR_kwDOCUB6oc6hB4O1 | 39,739 | Add fast image processor Janus, Deepseek VL, Deepseek VL hybrid | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-28T17:48:43 | 2025-08-01T16:20:08 | 2025-08-01T16:20:08 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39739",
"html_url": "https://github.com/huggingface/transformers/pull/39739",
"diff_url": "https://github.com/huggingface/transformers/pull/39739.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39739.patch",
"merged_at": "2025-08-01T16:20:08"
} | As the title says.
Cc @zucchini-nlp as I think you reviewed these models?
Also it would be great to have fast image processors on release for the newest models, don't hesitate to ping me on the PRs, happy to help! | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39739/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39739/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39738 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39738/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39738/comments | https://api.github.com/repos/huggingface/transformers/issues/39738/events | https://github.com/huggingface/transformers/pull/39738 | 3,270,778,225 | PR_kwDOCUB6oc6hBsaM | 39,738 | Standardize CLAP model card format | {
"login": "yanamis",
"id": 72974057,
"node_id": "MDQ6VXNlcjcyOTc0MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/72974057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanamis",
"html_url": "https://github.com/yanamis",
"followers_url": "https://api.github.com/users/yanamis/followers",
"following_url": "https://api.github.com/users/yanamis/following{/other_user}",
"gists_url": "https://api.github.com/users/yanamis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanamis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanamis/subscriptions",
"organizations_url": "https://api.github.com/users/yanamis/orgs",
"repos_url": "https://api.github.com/users/yanamis/repos",
"events_url": "https://api.github.com/users/yanamis/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanamis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-28T17:32:16 | 2025-07-29T21:13:04 | 2025-07-29T21:13:04 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39738",
"html_url": "https://github.com/huggingface/transformers/pull/39738",
"diff_url": "https://github.com/huggingface/transformers/pull/39738.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39738.patch",
"merged_at": "2025-07-29T21:13:04"
} | # What does this PR do?
This PR updates the CLAP model card to follow the standardized format as requested in https://github.com/huggingface/transformers/issues/36979.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39738/timeline | null | null | null | null | true | true |