url string | repository_url string | labels_url string | comments_url string | events_url string | html_url string | id int64 | node_id string | number int64 | title string | user dict | labels list | state string | locked bool | assignee dict | assignees list | milestone null | comments list | created_at timestamp[ms] | updated_at timestamp[ms] | closed_at timestamp[ms] | author_association string | type dict | active_lock_reason null | draft bool | pull_request dict | body string | closed_by dict | reactions dict | timeline_url string | performed_via_github_app null | state_reason string | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool | is_closed bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/39235 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39235/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39235/comments | https://api.github.com/repos/huggingface/transformers/issues/39235/events | https://github.com/huggingface/transformers/issues/39235 | 3,204,559,039 | I_kwDOCUB6oc6_AbC_ | 39,235 | Specifying multiple metrics in TrainingArguments.metric_for_best_model | {
"login": "gumran",
"id": 147415574,
"node_id": "U_kgDOCMliFg",
"avatar_url": "https://avatars.githubusercontent.com/u/147415574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gumran",
"html_url": "https://github.com/gumran",
"followers_url": "https://api.github.com/users/gumran/followers",
"following_url": "https://api.github.com/users/gumran/following{/other_user}",
"gists_url": "https://api.github.com/users/gumran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gumran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gumran/subscriptions",
"organizations_url": "https://api.github.com/users/gumran/orgs",
"repos_url": "https://api.github.com/users/gumran/repos",
"events_url": "https://api.github.com/users/gumran/events{/privacy}",
"received_events_url": "https://api.github.com/users/gumran/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-07-05T08:12:42 | 2025-07-05T18:35:43 | null | NONE | null | null | null | null | ### Feature request
Currently, `TrainingArguments.metric_for_best_model` only accepts a `str`, such as "loss" or "accuracy". I suggest we allow specifying multiple metrics as an iterable, so that the training loop tracks each of them and saves the best checkpoint for each to a separate directory. This means we would also need to allow `output_dir` to be an iterable of matching length, and to adjust the logic of `greater_is_better`, `load_best_model_at_end`, and several other arguments.
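To make the proposal concrete, here is a small, runnable sketch of the per-metric selection logic in plain Python (independent of `transformers`); the names `metrics`, `greater_is_better`, `best`, and `update_best` are illustrative only and not part of any existing API:

```python
# Hypothetical sketch: track the best checkpoint per metric, as a
# multi-metric `metric_for_best_model` would require.
# All names here are illustrative, not existing transformers API.

metrics = ["accuracy", "loss"]          # proposed iterable of metric names
greater_is_better = {"accuracy": True, "loss": False}

best = {m: None for m in metrics}       # metric -> (value, checkpoint step)

def update_best(step, eval_results):
    """Record `step` as the best checkpoint for each metric it improves."""
    for m in metrics:
        value = eval_results[m]
        if best[m] is None:
            best[m] = (value, step)
            continue
        current, _ = best[m]
        improved = value > current if greater_is_better[m] else value < current
        if improved:
            best[m] = (value, step)

# Simulated evaluation loop: the best accuracy and the best loss need not
# come from the same checkpoint, which is the motivation for the feature.
update_best(100, {"accuracy": 0.71, "loss": 0.52})
update_best(200, {"accuracy": 0.78, "loss": 0.49})
update_best(300, {"accuracy": 0.76, "loss": 0.44})

print(best["accuracy"])  # (0.78, 200)
print(best["loss"])      # (0.44, 300)
```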
### Motivation
The rationale for this change is that we are often interested in saving checkpoints based on multiple metrics. As an example, when I was recently training an LLM via `trl.DPOConfig`, which inherits from `TrainingArguments`, I wanted to save the checkpoints that achieve the highest validation accuracy and the highest validation margin, which do not necessarily coincide. Instead, I had to either pick one metric and lose the other, or save a checkpoint at every evaluation, which is problematic in terms of disk space, among other issues.
### Your contribution
If this feature is approved, I can submit a PR. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39235/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39234 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39234/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39234/comments | https://api.github.com/repos/huggingface/transformers/issues/39234/events | https://github.com/huggingface/transformers/pull/39234 | 3,203,745,955 | PR_kwDOCUB6oc6df1Bw | 39,234 | Replace einsum with unsqueeze | {
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T21:38:02 | 2025-07-07T10:14:08 | 2025-07-07T10:14:08 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39234",
"html_url": "https://github.com/huggingface/transformers/pull/39234",
"diff_url": "https://github.com/huggingface/transformers/pull/39234.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39234.patch",
"merged_at": "2025-07-07T10:14:08"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes an issue when exporting VJEPA-2 to ONNX:
> InferenceError: [ShapeInferenceError] (op_type:Einsum, node name: /vjepa2/encoder/layer.0/attention/Einsum): Inputs has inconsistent type tensor(float)
It also makes things more readable for users who don't understand einsum notation 😅
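For readers unfamiliar with einsum, here is a small NumPy sketch (illustrative only, not the exact VJEPA-2 change) of how an einsum that broadcasts a per-head factor can be rewritten with an explicitly added axis, i.e. torch's `unsqueeze` / NumPy's `expand_dims`-style indexing:

```python
import numpy as np

# Illustrative only -- not the actual VJEPA-2 code. Shows how an einsum
# that scales each attention head's scores by a per-head factor can be
# replaced by inserting singleton axes and letting broadcasting do the work.
batch, heads, seq = 2, 4, 8
scores = np.random.randn(batch, heads, seq, seq).astype(np.float32)
scale = np.random.randn(heads).astype(np.float32)  # one factor per head

# einsum version: multiply each head's scores by its scale
out_einsum = np.einsum("bhij,h->bhij", scores, scale)

# "unsqueeze" version: reshape scale to (1, heads, 1, 1) and broadcast
out_unsqueeze = scores * scale[None, :, None, None]

assert np.allclose(out_einsum, out_unsqueeze)
print(out_unsqueeze.shape)  # (2, 4, 8, 8)
```

The broadcast form avoids the Einsum node entirely, which sidesteps ONNX shape-inference issues of the kind reported above.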
Code to reproduce:
```py
import torch
from transformers import AutoModelForVideoClassification
hf_repo = "facebook/vjepa2-vitl-fpc32-256-diving48"
model = AutoModelForVideoClassification.from_pretrained(hf_repo)
pixel_values_videos = torch.randn(2, 16, 3, 224, 224)
torch.onnx.export(model, # model being run
(pixel_values_videos, ), # model input (or a tuple for multiple inputs)
"model.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=18, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['pixel_values_videos'], # the model's input names
output_names = ['logits'], # the model's output names
dynamic_axes={'pixel_values_videos' : {0: 'batch_size', 1: 'num_frames', 3: 'height', 4: 'width'}, # variable length axes
'logits' : {0: 'batch_size'}},
)
import onnx
onnx.checker.check_model("model.onnx", full_check=True)
```
### Before
```
InferenceError: [ShapeInferenceError] (op_type:Einsum, node name: /vjepa2/encoder/layer.0/attention/Einsum): Inputs has inconsistent type tensor(float)
```
### After
```
no error
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@qubvel | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39234/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39233 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39233/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39233/comments | https://api.github.com/repos/huggingface/transformers/issues/39233/events | https://github.com/huggingface/transformers/pull/39233 | 3,203,711,651 | PR_kwDOCUB6oc6dftlo | 39,233 | Update LED model card | {
"login": "dross20",
"id": 73395516,
"node_id": "MDQ6VXNlcjczMzk1NTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/73395516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dross20",
"html_url": "https://github.com/dross20",
"followers_url": "https://api.github.com/users/dross20/followers",
"following_url": "https://api.github.com/users/dross20/following{/other_user}",
"gists_url": "https://api.github.com/users/dross20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dross20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dross20/subscriptions",
"organizations_url": "https://api.github.com/users/dross20/orgs",
"repos_url": "https://api.github.com/users/dross20/repos",
"events_url": "https://api.github.com/users/dross20/events{/privacy}",
"received_events_url": "https://api.github.com/users/dross20/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T21:13:23 | 2025-07-07T22:56:57 | 2025-07-07T22:56:57 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39233",
"html_url": "https://github.com/huggingface/transformers/pull/39233",
"diff_url": "https://github.com/huggingface/transformers/pull/39233.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39233.patch",
"merged_at": "2025-07-07T22:56:57"
} | # What does this PR do?
This PR replaces the LED model card with a new model card matching the format introduced in https://github.com/huggingface/transformers/issues/36979.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@stevhliu
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39233/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39232 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39232/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39232/comments | https://api.github.com/repos/huggingface/transformers/issues/39232/events | https://github.com/huggingface/transformers/pull/39232 | 3,203,394,454 | PR_kwDOCUB6oc6deq-8 | 39,232 | Add support for `ModernBertForMultipleChoice` | {
"login": "netique",
"id": 34926852,
"node_id": "MDQ6VXNlcjM0OTI2ODUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34926852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/netique",
"html_url": "https://github.com/netique",
"followers_url": "https://api.github.com/users/netique/followers",
"following_url": "https://api.github.com/users/netique/following{/other_user}",
"gists_url": "https://api.github.com/users/netique/gists{/gist_id}",
"starred_url": "https://api.github.com/users/netique/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/netique/subscriptions",
"organizations_url": "https://api.github.com/users/netique/orgs",
"repos_url": "https://api.github.com/users/netique/repos",
"events_url": "https://api.github.com/users/netique/events{/privacy}",
"received_events_url": "https://api.github.com/users/netique/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T17:52:03 | 2025-08-04T18:46:03 | 2025-08-04T18:45:44 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39232",
"html_url": "https://github.com/huggingface/transformers/pull/39232",
"diff_url": "https://github.com/huggingface/transformers/pull/39232.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39232.patch",
"merged_at": "2025-08-04T18:45:44"
} | # What does this PR do?
This PR implements `ModernBertForMultipleChoice` class that was missing.
## Who can review?
@ArthurZucker | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39232/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39231 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39231/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39231/comments | https://api.github.com/repos/huggingface/transformers/issues/39231/events | https://github.com/huggingface/transformers/issues/39231 | 3,203,291,446 | I_kwDOCUB6oc6-7lk2 | 39,231 | v4.53.0 - Qwen 2.5 VL Flash Attention error - object has no attribute is_causal | {
"login": "aidando73",
"id": 43259657,
"node_id": "MDQ6VXNlcjQzMjU5NjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/43259657?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aidando73",
"html_url": "https://github.com/aidando73",
"followers_url": "https://api.github.com/users/aidando73/followers",
"following_url": "https://api.github.com/users/aidando73/following{/other_user}",
"gists_url": "https://api.github.com/users/aidando73/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aidando73/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aidando73/subscriptions",
"organizations_url": "https://api.github.com/users/aidando73/orgs",
"repos_url": "https://api.github.com/users/aidando73/repos",
"events_url": "https://api.github.com/users/aidando73/events{/privacy}",
"received_events_url": "https://api.github.com/users/aidando73/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-04T16:49:24 | 2025-08-12T08:02:52 | 2025-08-12T08:02:52 | NONE | null | null | null | null | ### System Info
transformers==4.53.0
```
root@0bfa7bd36f4f:/home/aidan/home/fireworks# python -c "import torch; torch.utils.collect_env.main()"
Collecting environment information...
PyTorch version: 2.7.0a0+ecf3bae40a.nv25.02
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
BIOS Vendor ID: QEMU
Model name: Intel(R) Xeon(R) Platinum 8480+
BIOS Model name: pc-q35-8.0 CPU @ 2.0GHz
BIOS CPU family: 1
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 6.5 MiB (208 instances)
L1i cache: 6.5 MiB (208 instances)
L2 cache: 416 MiB (104 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-103
NUMA node1 CPU(s): 104-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cudnn-frontend==1.10.0
[pip3] nvtx==0.2.5
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.22.0
[pip3] optree==0.14.0
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-triton==3.2.0+git0d4682f0b.nvinternal
[pip3] torch==2.7.0a0+ecf3bae40a.nv25.2
[pip3] torch_geometric==2.5.3
[pip3] torch_tensorrt==2.6.0a0
[pip3] torchaudio==2.1.0+6ea1133
[pip3] torchdata==0.11.0
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.22.0a0
[pip3] triton==3.3.1
```
### Who can help?
cc: @amyeroberts, @qubvel, @zucchini-nlp for vision models
cc: @atarashii-nwu - on our end
Repro script:
```python
import torch
import requests
from PIL import Image
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
# Load Qwen 2.5 VL model and processor
print("Loading Qwen2.5-VL-3B-Instruct...")
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen2.5-VL-3B-Instruct",
torch_dtype="auto",
device_map="auto",
attn_implementation="flash_attention_2",
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")
# Load image from URL
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)
# Create chat messages with image and text
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{
"role": "user",
"content": [
{"type": "image", "image": image_url},
{"type": "text", "text": "What do you see in this image? Please describe it in detail."},
],
},
]
# Apply chat template
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Process vision info and tokenize
inputs = processor(text=[text], images=[image], padding=True, return_tensors="pt")
inputs = inputs.to(model.device)
# Generate response
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids = [output_ids[len(input_ids) :] for input_ids, output_ids in zip(inputs.input_ids, generated_ids)]
response = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print("Response:", response[0])
```
Returns error:
```
root@0bfa7bd36f4f:/home/aidan/home/fireworks# python repro.py
Loading Qwen2.5-VL-3B-Instruct...
You are attempting to use Flash Attention 2.0 without specifying a torch dtype. This might lead to unexpected behaviour
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████| 2/2 [00:02<00:00, 1.17s/it]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
You have video processor config saved in `preprocessor.json` file which is deprecated. Video processor configs should be saved in their own `video_preprocessor.json` file. You can rename the file or load and save the processor back which renames it automatically. Loading from `preprocessor.json` will be removed in v5.0.
self.gradient_checkpointing: False
self.training: False
Traceback (most recent call last):
File "/home/aidan/home/fireworks/repro.py", line 40, in <module>
generated_ids = model.generate(**inputs, max_new_tokens=512)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/root/.local/lib/python3.12/site-packages/transformers/generation/utils.py", line 2623, in generate
result = self._sample(
^^^^^^^^^^^^^
File "/root/.local/lib/python3.12/site-packages/transformers/generation/utils.py", line 3604, in _sample
outputs = self(**model_inputs, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/accelerate/hooks.py", line 175, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.local/lib/python3.12/site-packages/transformers/utils/generic.py", line 943, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.local/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1487, in forward
outputs = self.model(
^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.local/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1228, in forward
image_embeds = self.get_image_features(pixel_values, image_grid_thw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.local/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1178, in get_image_features
image_embeds = self.visual(pixel_values, grid_thw=image_grid_thw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.local/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 475, in forward
hidden_states = blk(
^^^^
File "/root/.local/lib/python3.12/site-packages/transformers/modeling_layers.py", line 86, in __call__
return super().__call__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/accelerate/hooks.py", line 175, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.local/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 291, in forward
hidden_states = hidden_states + self.attn(
^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/accelerate/hooks.py", line 175, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/.local/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 254, in forward
attn_output, _ = attention_interface(
^^^^^^^^^^^^^^^^^^^^
File "/root/.local/lib/python3.12/site-packages/transformers/integrations/flash_attention.py", line 71, in flash_attention_forward
is_causal=module.is_causal,
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1928, in __getattr__
raise AttributeError(
AttributeError: 'Qwen2_5_VLVisionAttention' object has no attribute 'is_causal'
```
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
(see above)
### Expected behavior
Should return a well-formed Qwen 2.5 VL response | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39231/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39231/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39230 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39230/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39230/comments | https://api.github.com/repos/huggingface/transformers/issues/39230/events | https://github.com/huggingface/transformers/pull/39230 | 3,203,290,121 | PR_kwDOCUB6oc6deVGu | 39,230 | [server] add tests and fix passing a custom `generation_config` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T16:48:25 | 2025-07-10T13:50:30 | 2025-07-10T13:41:38 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39230",
"html_url": "https://github.com/huggingface/transformers/pull/39230",
"diff_url": "https://github.com/huggingface/transformers/pull/39230.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39230.patch",
"merged_at": "2025-07-10T13:41:38"
} | # What does this PR do?
This PR:
- Adds basic usage tests for `transformers serve`, including tool use
- Fixes the `tiny-agents` demo (new exception, I think from the new huggingface_hub version?)
- Fixes the use of a custom `GenerationConfig` in `transformers serve`
- In both `transformers serve` and `transformers chat`, ensures we start building a `GenerationConfig` from the model's default, rather than the global default. This should enable better outcomes with default parameterization. | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39230/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39229 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39229/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39229/comments | https://api.github.com/repos/huggingface/transformers/issues/39229/events | https://github.com/huggingface/transformers/pull/39229 | 3,203,224,607 | PR_kwDOCUB6oc6deHY4 | 39,229 | fix `fastspeech2_conformer` tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T16:14:58 | 2025-07-07T13:04:28 | 2025-07-07T13:04:26 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39229",
"html_url": "https://github.com/huggingface/transformers/pull/39229",
"diff_url": "https://github.com/huggingface/transformers/pull/39229.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39229.patch",
"merged_at": "2025-07-07T13:04:26"
} | # What does this PR do?
The 2 tests have been failing for a long time. I also need to update one set of expected output values. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39229/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39228 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39228/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39228/comments | https://api.github.com/repos/huggingface/transformers/issues/39228/events | https://github.com/huggingface/transformers/pull/39228 | 3,203,220,594 | PR_kwDOCUB6oc6deGkq | 39,228 | [`Ernie 4.5`] Add ernie text models | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-04T16:12:21 | 2025-07-21T17:51:54 | 2025-07-21T17:51:49 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39228",
"html_url": "https://github.com/huggingface/transformers/pull/39228",
"diff_url": "https://github.com/huggingface/transformers/pull/39228.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39228.patch",
"merged_at": "2025-07-21T17:51:49"
} | Adding the Ernie 4.5 suite of models.
Progress:
- [x] Ernie 4.5 pure text model (0.3B)
- [x] MoE Ernie
- [x] Loading check with untied weights (tested on a dummy model)
- [x] TP tests
- [x] Failing with tied weights, needs to be fixed then it's done
- [x] Correction bias clarification
- Following the paddle code instead of the remote code; added a note, so this is subject to change
- [x] Update configs on hub
- [ ] (MTP support in training)
- [x] Integration test ^ (needs slow runs to cross check)
- [x] Check whether the MoE models also need a rotation conversion (the 0.3B modeling files differ from the other ones regarding RoPE)
- Yes, they do - turns out they use a similar trick as I did in 393c2c772aebeac0c6816bd2086c600254161ea5
- Adapted from GLM as they do the same RoPE style as well
- [x] Fixup tokenization
- [x] ~[Conversion](https://gist.github.com/vasqu/7828357fd24929f4d7a51202b0801c3e)~ see `convert...tokenizer`
- [x] Update on the hub
- [x] Docs (might need updates based on the tokenizer ^)
- [x] Update original hub on baidu side --> tokenizer + configs
New/Followup PR:
- [ ] MoE Ernie VL
- [ ] MoE is different (not allowing for the original MoE formula (Mixtral-based)?)
- [ ] It can have different capacities
- [ ] Different gating :eyes:
- [ ] 3D RoPE in image and text (with different RoPE formulation ~GLM style, even/odd instead of half/half)
- [ ] Miscellaneous, as in the other remote code
- [ ] Attention
- [ ] RMS norm
- [ ] Residual
- [ ] Proper padding support etc. | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39228/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 6,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39228/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39227 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39227/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39227/comments | https://api.github.com/repos/huggingface/transformers/issues/39227/events | https://github.com/huggingface/transformers/pull/39227 | 3,203,210,395 | PR_kwDOCUB6oc6deErR | 39,227 | Updated CamemBERT model card to new standardized format | {
"login": "MShaheerMalik77",
"id": 157911864,
"node_id": "U_kgDOCWmLOA",
"avatar_url": "https://avatars.githubusercontent.com/u/157911864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MShaheerMalik77",
"html_url": "https://github.com/MShaheerMalik77",
"followers_url": "https://api.github.com/users/MShaheerMalik77/followers",
"following_url": "https://api.github.com/users/MShaheerMalik77/following{/other_user}",
"gists_url": "https://api.github.com/users/MShaheerMalik77/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MShaheerMalik77/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MShaheerMalik77/subscriptions",
"organizations_url": "https://api.github.com/users/MShaheerMalik77/orgs",
"repos_url": "https://api.github.com/users/MShaheerMalik77/repos",
"events_url": "https://api.github.com/users/MShaheerMalik77/events{/privacy}",
"received_events_url": "https://api.github.com/users/MShaheerMalik77/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T16:08:09 | 2025-07-11T17:59:09 | 2025-07-11T17:59:09 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39227",
"html_url": "https://github.com/huggingface/transformers/pull/39227",
"diff_url": "https://github.com/huggingface/transformers/pull/39227.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39227.patch",
"merged_at": "2025-07-11T17:59:09"
} | # What does this PR do?
This PR updates the CamemBERT model card (`camembert.md`) by standardizing its format according to https://github.com/huggingface/transformers/issues/36979. It includes example code for performing masked language modelling with the pipeline and AutoModel classes, as well as a quantization example.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu please do let me know if you'd like me to make any changes! | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39227/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39227/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39226 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39226/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39226/comments | https://api.github.com/repos/huggingface/transformers/issues/39226/events | https://github.com/huggingface/transformers/pull/39226 | 3,203,083,884 | PR_kwDOCUB6oc6ddp7z | 39,226 | Fixes #39204: add fallback if get_base_model missing | {
"login": "sebastianvlad1",
"id": 30313538,
"node_id": "MDQ6VXNlcjMwMzEzNTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/30313538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sebastianvlad1",
"html_url": "https://github.com/sebastianvlad1",
"followers_url": "https://api.github.com/users/sebastianvlad1/followers",
"following_url": "https://api.github.com/users/sebastianvlad1/following{/other_user}",
"gists_url": "https://api.github.com/users/sebastianvlad1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sebastianvlad1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebastianvlad1/subscriptions",
"organizations_url": "https://api.github.com/users/sebastianvlad1/orgs",
"repos_url": "https://api.github.com/users/sebastianvlad1/repos",
"events_url": "https://api.github.com/users/sebastianvlad1/events{/privacy}",
"received_events_url": "https://api.github.com/users/sebastianvlad1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T15:13:09 | 2025-07-16T13:51:42 | 2025-07-16T13:51:31 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39226",
"html_url": "https://github.com/huggingface/transformers/pull/39226",
"diff_url": "https://github.com/huggingface/transformers/pull/39226.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39226.patch",
"merged_at": "2025-07-16T13:51:30"
This PR adds a fallback mechanism to extract the base model from a PEFT-wrapped model in case the `get_base_model()` method is missing. Specifically, it attempts to retrieve `model.base_model.model` if `get_base_model` is not available. This improves compatibility with different PEFT wrappers and custom model classes.
The logic has been extracted into a dedicated `try_get_base_model` utility function to improve code reuse and robustness.
Fixes #39204.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://github.com/huggingface/transformers/issues/39204)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39226/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39225 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39225/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39225/comments | https://api.github.com/repos/huggingface/transformers/issues/39225/events | https://github.com/huggingface/transformers/pull/39225 | 3,203,075,087 | PR_kwDOCUB6oc6ddoAP | 39,225 | feat: add sliding window attention to Continuous Batching | {
"login": "McPatate",
"id": 9112841,
"node_id": "MDQ6VXNlcjkxMTI4NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/McPatate",
"html_url": "https://github.com/McPatate",
"followers_url": "https://api.github.com/users/McPatate/followers",
"following_url": "https://api.github.com/users/McPatate/following{/other_user}",
"gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/McPatate/subscriptions",
"organizations_url": "https://api.github.com/users/McPatate/orgs",
"repos_url": "https://api.github.com/users/McPatate/repos",
"events_url": "https://api.github.com/users/McPatate/events{/privacy}",
"received_events_url": "https://api.github.com/users/McPatate/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T15:10:12 | 2025-09-11T07:31:44 | 2025-09-11T07:31:20 | MEMBER | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39225",
"html_url": "https://github.com/huggingface/transformers/pull/39225",
"diff_url": "https://github.com/huggingface/transformers/pull/39225.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39225.patch",
"merged_at": null
} | # What does this PR do?
Adds Sliding Window Attention to Continuous Batching | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39225/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39224 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39224/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39224/comments | https://api.github.com/repos/huggingface/transformers/issues/39224/events | https://github.com/huggingface/transformers/issues/39224 | 3,202,815,590 | I_kwDOCUB6oc6-5xZm | 39,224 | transformers: FlaubertTokenizer: do_lowercase_and_remove_accent: make the logger warning actionable (don't only tell what's wrong, rather suggest what could be done about that) | {
"login": "kirisakow",
"id": 11773604,
"node_id": "MDQ6VXNlcjExNzczNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/11773604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kirisakow",
"html_url": "https://github.com/kirisakow",
"followers_url": "https://api.github.com/users/kirisakow/followers",
"following_url": "https://api.github.com/users/kirisakow/following{/other_user}",
"gists_url": "https://api.github.com/users/kirisakow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kirisakow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kirisakow/subscriptions",
"organizations_url": "https://api.github.com/users/kirisakow/orgs",
"repos_url": "https://api.github.com/users/kirisakow/repos",
"events_url": "https://api.github.com/users/kirisakow/events{/privacy}",
"received_events_url": "https://api.github.com/users/kirisakow/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-04T13:48:52 | 2025-10-19T09:59:12 | null | NONE | null | null | null | null | Please make the logger warning below *actionable* (**don't just say what's wrong; suggest what can be done about it**):
https://github.com/huggingface/transformers/blob/e6a8063ef1af16df964b644b07e1d17e96555d23/src/transformers/models/flaubert/tokenization_flaubert.py#L208-L209
Here's more context:
https://github.com/huggingface/transformers/blob/e6a8063ef1af16df964b644b07e1d17e96555d23/src/transformers/models/flaubert/tokenization_flaubert.py#L205-L212
The community would appreciate it. Thank you, HF 🤗 | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39224/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39223 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39223/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39223/comments | https://api.github.com/repos/huggingface/transformers/issues/39223/events | https://github.com/huggingface/transformers/pull/39223 | 3,202,746,431 | PR_kwDOCUB6oc6dck3g | 39,223 | Add support for older versions in is_torchdynamo_compiling(). | {
"login": "Pqlet",
"id": 67025630,
"node_id": "MDQ6VXNlcjY3MDI1NjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/67025630?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pqlet",
"html_url": "https://github.com/Pqlet",
"followers_url": "https://api.github.com/users/Pqlet/followers",
"following_url": "https://api.github.com/users/Pqlet/following{/other_user}",
"gists_url": "https://api.github.com/users/Pqlet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pqlet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pqlet/subscriptions",
"organizations_url": "https://api.github.com/users/Pqlet/orgs",
"repos_url": "https://api.github.com/users/Pqlet/repos",
"events_url": "https://api.github.com/users/Pqlet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pqlet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T13:25:03 | 2025-07-04T14:04:07 | 2025-07-04T14:04:07 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39223",
"html_url": "https://github.com/huggingface/transformers/pull/39223",
"diff_url": "https://github.com/huggingface/transformers/pull/39223.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39223.patch",
"merged_at": null
} | Support is_torchdynamo_compiling() for older versions.
# What does this PR do?
Fixes an error in `is_torchdynamo_compiling()` that is thrown on older versions of torch, e.g. torch==2.2.2. Support for `torch._dynamo.external_utils.is_compiling()` was deprecated, and torch==2.2.2 cannot handle torch.compile() on the Llama 3.2 model.
@ArthurZucker | {
"login": "Pqlet",
"id": 67025630,
"node_id": "MDQ6VXNlcjY3MDI1NjMw",
"avatar_url": "https://avatars.githubusercontent.com/u/67025630?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pqlet",
"html_url": "https://github.com/Pqlet",
"followers_url": "https://api.github.com/users/Pqlet/followers",
"following_url": "https://api.github.com/users/Pqlet/following{/other_user}",
"gists_url": "https://api.github.com/users/Pqlet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pqlet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pqlet/subscriptions",
"organizations_url": "https://api.github.com/users/Pqlet/orgs",
"repos_url": "https://api.github.com/users/Pqlet/repos",
"events_url": "https://api.github.com/users/Pqlet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pqlet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39223/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39222 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39222/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39222/comments | https://api.github.com/repos/huggingface/transformers/issues/39222/events | https://github.com/huggingface/transformers/pull/39222 | 3,202,534,664 | PR_kwDOCUB6oc6db2FF | 39,222 | Enable granite 4 hybrid integration tests | {
"login": "alex-jw-brooks",
"id": 10740300,
"node_id": "MDQ6VXNlcjEwNzQwMzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/10740300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alex-jw-brooks",
"html_url": "https://github.com/alex-jw-brooks",
"followers_url": "https://api.github.com/users/alex-jw-brooks/followers",
"following_url": "https://api.github.com/users/alex-jw-brooks/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-jw-brooks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alex-jw-brooks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-jw-brooks/subscriptions",
"organizations_url": "https://api.github.com/users/alex-jw-brooks/orgs",
"repos_url": "https://api.github.com/users/alex-jw-brooks/repos",
"events_url": "https://api.github.com/users/alex-jw-brooks/events{/privacy}",
"received_events_url": "https://api.github.com/users/alex-jw-brooks/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-04T12:18:30 | 2025-07-07T18:25:49 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39222",
"html_url": "https://github.com/huggingface/transformers/pull/39222",
"diff_url": "https://github.com/huggingface/transformers/pull/39222.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39222.patch",
"merged_at": null
} | Enables granite moe hybrid integration tests using the tiny preview model. Tolerance is adjusted to be more lenient for bfloat16.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # https://github.com/huggingface/transformers/issues/38542
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@ydshieh can you please take a look? | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39222/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39221 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39221/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39221/comments | https://api.github.com/repos/huggingface/transformers/issues/39221/events | https://github.com/huggingface/transformers/pull/39221 | 3,202,516,185 | PR_kwDOCUB6oc6dbyBO | 39,221 | 🚨 Fix Inconsistent `input_feature` length and `attention_mask` length in `WhisperFeatureExtractor` | {
"login": "BakerBunker",
"id": 17872844,
"node_id": "MDQ6VXNlcjE3ODcyODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/17872844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BakerBunker",
"html_url": "https://github.com/BakerBunker",
"followers_url": "https://api.github.com/users/BakerBunker/followers",
"following_url": "https://api.github.com/users/BakerBunker/following{/other_user}",
"gists_url": "https://api.github.com/users/BakerBunker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BakerBunker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BakerBunker/subscriptions",
"organizations_url": "https://api.github.com/users/BakerBunker/orgs",
"repos_url": "https://api.github.com/users/BakerBunker/repos",
"events_url": "https://api.github.com/users/BakerBunker/events{/privacy}",
"received_events_url": "https://api.github.com/users/BakerBunker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T12:13:26 | 2025-09-10T09:38:48 | 2025-09-10T09:38:48 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39221",
"html_url": "https://github.com/huggingface/transformers/pull/39221",
"diff_url": "https://github.com/huggingface/transformers/pull/39221.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39221.patch",
"merged_at": "2025-09-10T09:38:48"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #39214
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@eustlb @zucchini-nlp
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39221/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39220 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39220/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39220/comments | https://api.github.com/repos/huggingface/transformers/issues/39220/events | https://github.com/huggingface/transformers/pull/39220 | 3,202,341,662 | PR_kwDOCUB6oc6dbLvR | 39,220 | Update expected values (after switching to A10) - part 8 - Final | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T11:23:12 | 2025-07-04T11:37:01 | 2025-07-04T11:35:53 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39220",
"html_url": "https://github.com/huggingface/transformers/pull/39220",
"diff_url": "https://github.com/huggingface/transformers/pull/39220.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39220.patch",
"merged_at": "2025-07-04T11:35:53"
} | # What does this PR do?
2 remaining failed tests that were missed in the previous PRs. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39220/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39219 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39219/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39219/comments | https://api.github.com/repos/huggingface/transformers/issues/39219/events | https://github.com/huggingface/transformers/issues/39219 | 3,202,323,601 | I_kwDOCUB6oc6-35SR | 39,219 | Feature Request: Native Support for Custom Multimodal Models | {
"login": "DrxcoDev2",
"id": 201119051,
"node_id": "U_kgDOC_zVSw",
"avatar_url": "https://avatars.githubusercontent.com/u/201119051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrxcoDev2",
"html_url": "https://github.com/DrxcoDev2",
"followers_url": "https://api.github.com/users/DrxcoDev2/followers",
"following_url": "https://api.github.com/users/DrxcoDev2/following{/other_user}",
"gists_url": "https://api.github.com/users/DrxcoDev2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrxcoDev2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrxcoDev2/subscriptions",
"organizations_url": "https://api.github.com/users/DrxcoDev2/orgs",
"repos_url": "https://api.github.com/users/DrxcoDev2/repos",
"events_url": "https://api.github.com/users/DrxcoDev2/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrxcoDev2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-07-04T11:17:00 | 2025-07-04T11:17:00 | null | NONE | null | null | null | null | ### Feature request
I'm currently working on a research project that involves combining textual and audio data in a custom multimodal architecture. While the library already supports several powerful pretrained multimodal models like CLIP and Flamingo, building new custom multimodal models from scratch is still quite manual and repetitive.
I’d love to propose a general framework for defining and training custom multimodal models natively within transformers. I believe this could benefit many researchers and developers looking to explore new combinations of modalities.
### Motivation
While working on a project that involves combining textual and audio data, I found it quite cumbersome to build a custom multimodal model using the current transformers library. Although the library supports impressive multimodal architectures like CLIP, Flamingo, and VisionEncoderDecoderModel, these are tied to specific use cases and pretrained models.
I'm often frustrated by the lack of a general, modular interface that allows me to:
- Seamlessly combine different pretrained encoders (e.g., BERT + Wav2Vec2).
- Handle multimodal inputs using a unified processor.
- Train and fine-tune these models using the Trainer API without custom boilerplate code.
This lack of flexibility makes experimenting with new multimodal architectures more difficult and discourages rapid prototyping.
By introducing a native multimodal model base class and processor integration, the transformers library could better support custom research and production use cases involving mixed modalities.
### Your contribution
Yes — I’d be happy to contribute to this feature.
If the proposed idea aligns with the maintainers’ vision, I’m willing to:
- Collaborate on the design of the API and architecture.
- Open a Pull Request implementing a minimal working version of the MultiModalModel base class and example processors.
- Write basic documentation and provide a working notebook or demo.
"url": "https://api.github.com/repos/huggingface/transformers/issues/39219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39219/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39218 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39218/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39218/comments | https://api.github.com/repos/huggingface/transformers/issues/39218/events | https://github.com/huggingface/transformers/pull/39218 | 3,202,198,690 | PR_kwDOCUB6oc6dask6 | 39,218 | Update expected values (after switching to A10) - part 7 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T10:39:47 | 2025-07-04T10:52:29 | 2025-07-04T10:48:10 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39218",
"html_url": "https://github.com/huggingface/transformers/pull/39218",
"diff_url": "https://github.com/huggingface/transformers/pull/39218.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39218.patch",
"merged_at": "2025-07-04T10:48:10"
} | # What does this PR do?
The final one 🏅
As discussed offline, will merge to move fast as it's only expected outputs updated for A10 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39218/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39217 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39217/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39217/comments | https://api.github.com/repos/huggingface/transformers/issues/39217/events | https://github.com/huggingface/transformers/issues/39217 | 3,201,849,745 | I_kwDOCUB6oc6-2FmR | 39,217 | torch fake_tensor load hf model failed | {
"login": "SandyWang85",
"id": 191958518,
"node_id": "U_kgDOC3EN9g",
"avatar_url": "https://avatars.githubusercontent.com/u/191958518?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SandyWang85",
"html_url": "https://github.com/SandyWang85",
"followers_url": "https://api.github.com/users/SandyWang85/followers",
"following_url": "https://api.github.com/users/SandyWang85/following{/other_user}",
"gists_url": "https://api.github.com/users/SandyWang85/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SandyWang85/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SandyWang85/subscriptions",
"organizations_url": "https://api.github.com/users/SandyWang85/orgs",
"repos_url": "https://api.github.com/users/SandyWang85/repos",
"events_url": "https://api.github.com/users/SandyWang85/events{/privacy}",
"received_events_url": "https://api.github.com/users/SandyWang85/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T08:47:32 | 2025-08-12T08:02:56 | 2025-08-12T08:02:56 | NONE | null | null | null | null | The error type is "TypeError: FakeTensor.__new__() got an unexpected keyword argument 'fake_device'"
The code to reproduce:
```python
import torch
from torch import nn
from torch._subclasses import FakeTensorMode
from typing import Optional, Callable, Dict, Any
from transformers import AutoModelForCausalLM, AutoTokenizer

class BaseModelLoader:
    def __init__(
        self,
        model_provider: Optional[Callable],
        model_kwargs: Dict[str, Any] = {},
        fake_device_str: str = "cpu"
    ) -> None:
        self.model_provider = model_provider
        self.model_kwargs = model_kwargs
        self.device_str = fake_device_str

    def load(self) -> Optional[nn.Module]:
        raise NotImplementedError("")

class ShadowModelLoader(BaseModelLoader):
    def __init__(
        self,
        model_provider: Optional[Callable],
        model_kwargs: Dict[str, Any] = {},
        fake_device_str: str = "cpu"
    ) -> None:
        super().__init__(model_provider, model_kwargs, fake_device_str)
        torch.__future__.set_swap_module_params_on_conversion(False)

    def load(self) -> Optional[nn.Module]:
        with FakeTensorMode() as fake_mode:
            model = self.model_provider(**self.model_kwargs) if self.model_provider else None
            if model is None:
                return None
            self._configure_fake_tensors(model, self.device_str)
            return model

    def _configure_fake_tensors(self, model: nn.Module, device_str: str):
        device = torch.device(device_str)
        for name, param in model.named_parameters():
            if hasattr(param, 'fake_device'):
                param.fake_device = device

def get_huggingface_model(model_name: str) -> nn.Module:
    return AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float32,
        low_cpu_mem_usage=True
    )

if __name__ == "__main__":
    model_config = {
        "model_name": "gpt2"
    }
    loader = ShadowModelLoader(
        model_provider=get_huggingface_model,
        model_kwargs=model_config,
        fake_device_str="cpu"
    )
    model = loader.load()
``` | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39217/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39216 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39216/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39216/comments | https://api.github.com/repos/huggingface/transformers/issues/39216/events | https://github.com/huggingface/transformers/pull/39216 | 3,201,837,479 | PR_kwDOCUB6oc6dZdJx | 39,216 | Fix patch helper | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-04T08:43:03 | 2025-07-07T13:11:49 | 2025-07-07T13:11:48 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39216",
"html_url": "https://github.com/huggingface/transformers/pull/39216",
"diff_url": "https://github.com/huggingface/transformers/pull/39216.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39216.patch",
"merged_at": "2025-07-07T13:11:48"
} | # What does this PR do?
Since it's for a patch, there should not be a -1. | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39216/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39215 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39215/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39215/comments | https://api.github.com/repos/huggingface/transformers/issues/39215/events | https://github.com/huggingface/transformers/issues/39215 | 3,201,255,977 | I_kwDOCUB6oc6-z0op | 39,215 | _load_rng_state after get_batch_samples may break training reproducibility when dataloader has random operations | {
"login": "rangehow",
"id": 88258534,
"node_id": "MDQ6VXNlcjg4MjU4NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rangehow",
"html_url": "https://github.com/rangehow",
"followers_url": "https://api.github.com/users/rangehow/followers",
"following_url": "https://api.github.com/users/rangehow/following{/other_user}",
"gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rangehow/subscriptions",
"organizations_url": "https://api.github.com/users/rangehow/orgs",
"repos_url": "https://api.github.com/users/rangehow/repos",
"events_url": "https://api.github.com/users/rangehow/events{/privacy}",
"received_events_url": "https://api.github.com/users/rangehow/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-04T04:14:46 | 2025-08-25T08:03:19 | 2025-08-25T08:03:19 | CONTRIBUTOR | null | null | null | null |
### Reproduction
The current implementation in the `Trainer`'s `_inner_training_loop` for resuming from a checkpoint calls `_load_rng_state` *after* fetching the data batch with `get_batch_samples`. This logic appears to be designed to handle the complexities of `skip_first_batches` and multi-worker dataloading.
However, this order can break true reproducibility if the data loading process itself involves random operations (e.g., in-batch transformations, random samplers, or datasets with random augmentations in `__getitem__`).
When `get_batch_samples` is called before `_load_rng_state`, the random operations within the dataloader consume the RNG state from the *current* execution stream, not the *restored* one. This leads to two issues:
1. The data batch fetched is different from the one in the original, uninterrupted run.
2. The subsequent `_load_rng_state` call resets the RNG, but the `training_step` (e.g., with Dropout) then operates on this incorrect data, and uses an RNG state that is out of sync with the original run's state progression.
Conversely, if `_load_rng_state` is called *before* `get_batch_samples`, the entire sequence of random events (data loading + model training) can be perfectly reproduced, as demonstrated by the experiment below.
This suggests a potential conflict between the current implementation's robustness for `skip_first_batches` and its ability to ensure bit-for-bit reproducibility in all data loading scenarios.
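The core failure mode can be distilled to a few lines with the stdlib `random` module (the `fake_batch`/`fake_dropout` functions below are illustrative stand-ins for randomized data loading and dropout, not actual Trainer code):

```python
import random

def fake_batch():                 # data loading consumes RNG
    return random.random()

def fake_dropout(x):              # training step consumes RNG
    return x * random.random()

# Uninterrupted run: checkpoint after step 1, then run step 2.
random.seed(0)
fake_batch()                         # step 1
state = random.getstate()            # checkpoint
golden = fake_dropout(fake_batch())  # step 2 (golden standard)

# Resume, Hypothesis A: load the RNG state, then fetch the batch.
random.setstate(state)
a = fake_dropout(fake_batch())

# Resume, Hypothesis B: fetch the batch first (from whatever the current
# stream happens to be), then load the RNG state -- mimics the Trainer.
random.seed(123)                     # stand-in for the current execution stream
wrong_batch = fake_batch()
random.setstate(state)
b = fake_dropout(wrong_batch)

assert a == golden                   # bit-for-bit reproduction
assert b != golden                   # wrong batch AND out-of-sync dropout RNG
```

In Hypothesis B, note that `fake_dropout` also draws the value that should have gone to data loading, so both the data and the training-step RNG are wrong.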
#### Reproducible code
The following script sets up a controlled experiment to demonstrate the issue. It simulates a training run that is interrupted and then resumed. It compares two hypotheses for the recovery order:
* **Hypothesis A:** `load_rng_state()` -> `get_batch_samples()`
* **Hypothesis B:** `get_batch_samples()` -> `load_rng_state()` (This mimics the current `Trainer` logic)
The experiment clearly shows that only Hypothesis A successfully reproduces the "golden standard" output from the uninterrupted run.
```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, Dataset
import os
import random
import numpy as np
import shutil
# --- Core Components: Setup for the experiment ---
def set_seed(seed):
"""Set all random seeds for reproducibility."""
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed_all(seed)
np.random.seed(seed)
random.seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
def save_rng_state(path):
"""Saves the complete RNG state."""
os.makedirs(path, exist_ok=True)
states = {
'torch_rng_state': torch.get_rng_state(),
'cuda_rng_state': torch.cuda.get_rng_state() if torch.cuda.is_available() else None,
'numpy_rng_state': np.random.get_state(),
'python_rng_state': random.getstate(),
}
torch.save(states, os.path.join(path, 'rng_state.pth'))
def load_rng_state(path):
"""Loads the complete RNG state."""
states = torch.load(os.path.join(path, 'rng_state.pth'), weights_only=False)
torch.set_rng_state(states['torch_rng_state'])
if torch.cuda.is_available() and states['cuda_rng_state']:
torch.cuda.set_rng_state(states['cuda_rng_state'])
np.random.set_state(states['numpy_rng_state'])
random.setstate(states['python_rng_state'])
class RandomTransformDataset(Dataset):
"""A dataset that applies a random transform, making data loading a random process."""
def __init__(self, underlying_dataset):
self.underlying_dataset = underlying_dataset
def __len__(self):
return len(self.underlying_dataset)
def __getitem__(self, idx):
data, label = self.underlying_dataset[idx]
# This random operation makes the data loading process itself non-deterministic
# without proper RNG state management.
noise = torch.rand(data.shape) * 0.001
return data + noise, label
class SimpleModel(nn.Module):
"""A simple model with Dropout, making the training step a random process."""
def __init__(self):
super().__init__()
self.linear = nn.Linear(10, 10)
self.dropout = nn.Dropout(0.5)
def forward(self, x):
return self.dropout(self.linear(x))
# --- Experiment Execution ---
# Experiment parameters
SEED = 42
CKPT_DIR = "./experiment_ckpt"
INTERRUPT_STEP = 3
STEP_TO_VERIFY = INTERRUPT_STEP + 1
# Prepare the base dataset
base_dataset = TensorDataset(torch.randn(100, 10), torch.randn(100, 10))
print("="*60)
print(" REPRODUCIBILITY EXPERIMENT ")
print("="*60)
print(f"Goal: Determine the correct order of `load_rng_state` and `get_batch_samples`")
print(f" to reproduce training from a checkpoint at step {INTERRUPT_STEP}.")
print(f"Verification will happen at step {STEP_TO_VERIFY}.\n")
# 1. CONTROL GROUP: The uninterrupted run
print("--- [1] Running the Control Group (Golden Standard) ---")
set_seed(SEED)
model = SimpleModel()
model.train()
control_dataloader = DataLoader(RandomTransformDataset(base_dataset), batch_size=10, shuffle=True, num_workers=0)
control_iterator = iter(control_dataloader)
golden_output = None
for step in range(STEP_TO_VERIFY):
inputs, _ = next(control_iterator)
output = model(inputs)
if step + 1 == INTERRUPT_STEP:
print(f"Step {step+1}: Saving checkpoint...")
save_rng_state(CKPT_DIR)
if step + 1 == STEP_TO_VERIFY:
golden_output = output.detach().clone()
print(f"Step {step+1}: Storing golden output. Sum = {golden_output.sum().item():.6f}")
print("\n--- [2] Running Experimental Groups ---")
# Function to run one experimental hypothesis
def run_experiment(hypothesis_name, restore_order):
print(f"\n--- Testing Hypothesis {hypothesis_name} ---")
print(f"Restore order: {restore_order}")
set_seed(SEED)
model_exp = SimpleModel()
model_exp.train()
exp_dataloader = DataLoader(RandomTransformDataset(base_dataset), batch_size=10, shuffle=True, num_workers=0)
exp_iterator = iter(exp_dataloader)
# Skip to the point of interruption
for _ in range(INTERRUPT_STEP):
next(exp_iterator)
# Apply the hypothesis's restore order
if restore_order == "load_first":
print("Action: Loading RNG state...")
load_rng_state(CKPT_DIR)
print("Action: Getting batch samples...")
inputs, _ = next(exp_iterator)
elif restore_order == "get_first":
print("Action: Getting batch samples...")
inputs, _ = next(exp_iterator)
print("Action: Loading RNG state...")
load_rng_state(CKPT_DIR)
else:
raise ValueError("Unknown restore order")
# Perform the training step
print("Action: Performing training step...")
output_exp = model_exp(inputs)
print(f"Result: Output sum = {output_exp.sum().item():.6f}")
return output_exp.detach().clone()
# Hypothesis A: load -> get
output_A = run_experiment("A: load_first", "load_first")
# Hypothesis B: get -> load (Mimics Trainer)
output_B = run_experiment("B: get_first", "get_first")
# 4. ANALYSIS & CONCLUSION
print("\n\n" + "="*60)
print(" ANALYSIS AND CONCLUSION ")
print("="*60)
print(f"Golden Standard Output Sum (from Control Group): {golden_output.sum().item():.6f}\n")
print(f"Hypothesis A ('load_first') Output Sum: {output_A.sum().item():.6f}")
match_A = torch.allclose(golden_output, output_A)
print(f"Does it match the Golden Standard? -> {match_A}")
print(f"\nHypothesis B ('get_first') Output Sum: {output_B.sum().item():.6f}")
match_B = torch.allclose(golden_output, output_B)
print(f"Does it match the Golden Standard? -> {match_B}\n")
print("--- Conclusion based on experimental data ---")
if match_A and not match_B:
print("✅ The experiment confirms that Hypothesis A ('load_first') is the correct approach for full reproducibility.")
elif not match_A and match_B:
print("✅ The experiment confirms that Hypothesis B ('get_first') is correct.")
else:
print("❓ The experiment is inconclusive or failed. Please check the setup.")
# Cleanup
shutil.rmtree(CKPT_DIR)
```
#### Expected behavior
To achieve perfect reproducibility, Hypothesis A (`load_rng_state` before `get_batch_samples`) should be the correct approach, as it restores the RNG state before any random operations for the resumed step occur.
The current implementation in the `Trainer` (mimicked by Hypothesis B) fails to reproduce the original run in this scenario.
I understand there are complexities, especially with `skip_first_batches` potentially consuming the RNG state if loaded too early. This issue is intended to highlight this trade-off and start a discussion on whether a more robust solution for perfect reproducibility can be found.
Thank you for your time and for maintaining this incredible library.
| {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39215/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39214 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39214/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39214/comments | https://api.github.com/repos/huggingface/transformers/issues/39214/events | https://github.com/huggingface/transformers/issues/39214 | 3,201,195,846 | I_kwDOCUB6oc6-zl9G | 39,214 | Inconsistent `input_feature` length and `attention_mask` length in `WhisperFeatureExtractor` | {
"login": "BakerBunker",
"id": 17872844,
"node_id": "MDQ6VXNlcjE3ODcyODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/17872844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BakerBunker",
"html_url": "https://github.com/BakerBunker",
"followers_url": "https://api.github.com/users/BakerBunker/followers",
"following_url": "https://api.github.com/users/BakerBunker/following{/other_user}",
"gists_url": "https://api.github.com/users/BakerBunker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BakerBunker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BakerBunker/subscriptions",
"organizations_url": "https://api.github.com/users/BakerBunker/orgs",
"repos_url": "https://api.github.com/users/BakerBunker/repos",
"events_url": "https://api.github.com/users/BakerBunker/events{/privacy}",
"received_events_url": "https://api.github.com/users/BakerBunker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 7377881103,
"node_id": "LA_kwDOCUB6oc8AAAABt8GIDw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Whisper",
"name": "Whisper",
"color": "83303E",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-04T03:37:34 | 2025-08-11T08:02:57 | 2025-08-11T08:02:57 | CONTRIBUTOR | null | null | null | null | ### System Info
transformers `main` branch
### Who can help?
@eustlb @zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoProcessor
import numpy as np
audios=[np.random.randn(16000*5)]
processor=AutoProcessor.from_pretrained("openai/whisper-large-v3")
print(processor(
[audios[0][: 160 * 5 - 1]], return_attention_mask=True, sampling_rate=16000, padding=False
)["attention_mask"].shape)
print(processor(
[audios[0][: 160 * 5]], return_attention_mask=True, sampling_rate=16000, padding=False
)["attention_mask"].shape)
print(processor(
[audios[0][: 160 * 5 + 1]], return_attention_mask=True, sampling_rate=16000, padding=False
)["attention_mask"].shape)
```
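The off-by-one can also be seen with plain list slicing, independent of the feature extractor (assuming Whisper's `hop_length` of 160; the lists stand in for the sample-level attention mask):

```python
hop_length = 160

for audio_length in (160 * 5 - 1, 160 * 5, 160 * 5 + 1):
    mask = [1] * audio_length
    # current slicing: ceil division, (audio_length + hop_length - 1) // hop_length
    current = len(mask[::hop_length])
    # proposed slicing: floor division, audio_length // hop_length
    proposed = len(mask[hop_length - 1 :: hop_length])
    print(audio_length, current, proposed)

# 799 -> current 5, proposed 4
# 800 -> current 5, proposed 5
# 801 -> current 6, proposed 5
```

Only the proposed slicing agrees with `audio_length // hop_length` for all three lengths.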
### Expected behavior
The `input_feature` and `attention_mask` lengths should be `audio_length // hop_length`, but the code here:
https://github.com/huggingface/transformers/blob/e8e0c76162263840661fc0ca0da3952861754759/src/transformers/models/whisper/feature_extraction_whisper.py#L327-L329
makes the `attention_mask` length equal to `(audio_length + hop_length - 1) // hop_length`; it should be changed to:
```diff
diff --git a/src/transformers/models/whisper/feature_extraction_whisper.py b/src/transformers/models/whisper/feature_extraction_whisper.py
index 68c52c6eb3..b9f3b4cb35 100644
--- a/src/transformers/models/whisper/feature_extraction_whisper.py
+++ b/src/transformers/models/whisper/feature_extraction_whisper.py
@@ -326,7 +326,9 @@ class WhisperFeatureExtractor(SequenceFeatureExtractor):
if return_attention_mask:
# rescale from sample (48000) to feature (3000)
- padded_inputs["attention_mask"] = padded_inputs["attention_mask"][:, :: self.hop_length]
+ padded_inputs["attention_mask"] = padded_inputs["attention_mask"][
+ :, self.hop_length - 1 :: self.hop_length
+ ]
``` | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39214/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39213 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39213/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39213/comments | https://api.github.com/repos/huggingface/transformers/issues/39213/events | https://github.com/huggingface/transformers/issues/39213 | 3,201,033,378 | I_kwDOCUB6oc6-y-Si | 39,213 | Remove device to host sync triggered in _flash_attention_forward | {
"login": "piyifan123",
"id": 72015789,
"node_id": "MDQ6VXNlcjcyMDE1Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/72015789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/piyifan123",
"html_url": "https://github.com/piyifan123",
"followers_url": "https://api.github.com/users/piyifan123/followers",
"following_url": "https://api.github.com/users/piyifan123/following{/other_user}",
"gists_url": "https://api.github.com/users/piyifan123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/piyifan123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piyifan123/subscriptions",
"organizations_url": "https://api.github.com/users/piyifan123/orgs",
"repos_url": "https://api.github.com/users/piyifan123/repos",
"events_url": "https://api.github.com/users/piyifan123/events{/privacy}",
"received_events_url": "https://api.github.com/users/piyifan123/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-07-04T01:34:49 | 2025-07-04T01:34:49 | null | NONE | null | null | null | null | ### Feature request
# Problem
In https://github.com/huggingface/transformers/blob/037755ed54208eefa77673b0af2a0b13e51f2fb1/src/transformers/modeling_flash_attention_utils.py#L521, the condition check `(torch.diff(position_ids, dim=-1) >= 0).all()` forces the result computed from the device tensor `position_ids` to be synced to the host side.
During inference/training, this can cause serious performance degradation due to CPU blocking; see the following for an example:
<img width="2476" alt="Image" src="https://github.com/user-attachments/assets/e5c1adf1-940e-407a-b78d-9875d6ce9513" />
# Proposal
Precompute the result of `(torch.diff(position_ids, dim=-1) >= 0).all()` and store it in `FlashAttentionKwargs` so that we don't have to perform this device-to-host sync in every attention call in every layer.
The only open question is whether there exists a model for which this cannot be precomputed, i.e., where the `position_ids` sequence changes during the forward pass for the same batch. Given that we already cache `cu_seqlens` in `FlashAttentionKwargs` anyway (which is equivalent to `position_ids`), it seems reasonable to assume there isn't.
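A minimal sketch of the proposal in plain Python (the kwarg name `position_ids_monotonic` and both function names are hypothetical illustrations, not the actual transformers API; lists stand in for tensors):

```python
def precompute_flash_attention_kwargs(position_ids):
    # Run the monotonicity check once per batch; in the real setting this
    # would be the single device-to-host sync, done before the layer loop.
    is_monotonic = all(
        row[i] <= row[i + 1] for row in position_ids for i in range(len(row) - 1)
    )
    return {"position_ids_monotonic": is_monotonic}

def flash_attention_forward(layer_idx, flash_attn_kwargs):
    # Each layer reads the cached host-side flag instead of re-evaluating
    # (torch.diff(position_ids, dim=-1) >= 0).all() and syncing again.
    if flash_attn_kwargs["position_ids_monotonic"]:
        return f"layer {layer_idx}: monotonic position_ids path"
    return f"layer {layer_idx}: non-monotonic (packed) position_ids path"

# Packed batch: position ids reset mid-row, so the flag is False
# and every layer takes the packed path without a new sync.
kwargs = precompute_flash_attention_kwargs([[0, 1, 2, 0, 1]])
outputs = [flash_attention_forward(i, kwargs) for i in range(4)]
```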
### Motivation
As stated above, this can severely degrade the out-of-the-box performance of transformers, and it is usually hard for a normal user to notice. The fix would follow an existing mechanism, i.e., the `FlashAttentionKwargs` approach of avoiding recomputation of FA-required kwargs.
### Your contribution
Can prepare a PR if the team thinks the proposed approach is OK. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39213/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39212 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39212/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39212/comments | https://api.github.com/repos/huggingface/transformers/issues/39212/events | https://github.com/huggingface/transformers/pull/39212 | 3,200,700,372 | PR_kwDOCUB6oc6dVrkq | 39,212 | Add Ukrainian translation of README.md | {
"login": "VXXXO",
"id": 97465930,
"node_id": "U_kgDOBc82Sg",
"avatar_url": "https://avatars.githubusercontent.com/u/97465930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VXXXO",
"html_url": "https://github.com/VXXXO",
"followers_url": "https://api.github.com/users/VXXXO/followers",
"following_url": "https://api.github.com/users/VXXXO/following{/other_user}",
"gists_url": "https://api.github.com/users/VXXXO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VXXXO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VXXXO/subscriptions",
"organizations_url": "https://api.github.com/users/VXXXO/orgs",
"repos_url": "https://api.github.com/users/VXXXO/repos",
"events_url": "https://api.github.com/users/VXXXO/events{/privacy}",
"received_events_url": "https://api.github.com/users/VXXXO/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-03T21:29:05 | 2025-07-07T18:27:03 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39212",
"html_url": "https://github.com/huggingface/transformers/pull/39212",
"diff_url": "https://github.com/huggingface/transformers/pull/39212.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39212.patch",
"merged_at": null
} | # What does this PR do?
Adds Ukrainian translation of the main README.md file to improve accessibility for Ukrainian-speaking developers and researchers.
## Changes made
- Created `i18n/README_uk.md` with a complete Ukrainian translation of the main README.md
- Added link to Ukrainian version in the main README.md language navigation
- Translation includes all sections: installation, quickstart, examples, model showcases, etc.
- Adapted code examples and explanations for Ukrainian language context
- Maintained all original links, images, and technical references
## Motivation
This change improves the accessibility of the Transformers library for the Ukrainian developer community. With over 40 million Ukrainian speakers worldwide, this translation will help more developers and researchers easily understand and use the library in their native language.
The Ukrainian translation follows the same high-quality standards as other language versions in the repository, providing a comprehensive and accurate translation that maintains the technical accuracy of the original documentation.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu - Documentation | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39212/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39211 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39211/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39211/comments | https://api.github.com/repos/huggingface/transformers/issues/39211/events | https://github.com/huggingface/transformers/pull/39211 | 3,200,639,199 | PR_kwDOCUB6oc6dVen5 | 39,211 | Add mobilenet_v5 stub implementation to fix "Unknown Model" error | {
"login": "VXXXO",
"id": 97465930,
"node_id": "U_kgDOBc82Sg",
"avatar_url": "https://avatars.githubusercontent.com/u/97465930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VXXXO",
"html_url": "https://github.com/VXXXO",
"followers_url": "https://api.github.com/users/VXXXO/followers",
"following_url": "https://api.github.com/users/VXXXO/following{/other_user}",
"gists_url": "https://api.github.com/users/VXXXO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VXXXO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VXXXO/subscriptions",
"organizations_url": "https://api.github.com/users/VXXXO/orgs",
"repos_url": "https://api.github.com/users/VXXXO/repos",
"events_url": "https://api.github.com/users/VXXXO/events{/privacy}",
"received_events_url": "https://api.github.com/users/VXXXO/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-03T21:00:27 | 2025-07-07T14:16:41 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39211",
"html_url": "https://github.com/huggingface/transformers/pull/39211",
"diff_url": "https://github.com/huggingface/transformers/pull/39211.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39211.patch",
"merged_at": null
} | # What does this PR do?
This PR addresses issue #39208: "Unknown Model (mobilenetv5_300m_enc) when loading Gemma 3n".
## Problem
When loading Gemma 3n models, the default vision architecture is set to "mobilenetv5_300m_enc", but this architecture was not implemented in Transformers, causing an "Unknown Model" error that prevents users from using Gemma 3n.
## Solution
- Added minimal mobilenet_v5 implementation with proper structure:
- `MobileNetV5Config` - configuration class with standard parameters
- `MobileNetV5Model` - stub model implementation (inherits from PreTrainedModel)
- `MobileNetV5ImageProcessor` - stub image processor
- Registered mobilenet_v5 in all auto classes (AutoConfig, AutoModel, AutoImageProcessor)
- Used lazy loading to avoid circular dependencies
- Added proper docstrings and warnings about stub nature
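The registration pattern can be illustrated with a toy registry (a simplified stand-in for what `AutoConfig.register` / `AutoModel.register` do internally; the class bodies here are hypothetical stubs, not the PR's code):

```python
CONFIG_REGISTRY = {}   # model_type string -> config class
MODEL_REGISTRY = {}    # config class -> model class

class MobileNetV5Config:
    model_type = "mobilenet_v5"

class MobileNetV5Model:
    def __init__(self, config):
        self.config = config

# Registration step: once both mappings exist, the auto classes can
# resolve the model_type instead of raising "Unknown Model".
CONFIG_REGISTRY[MobileNetV5Config.model_type] = MobileNetV5Config
MODEL_REGISTRY[MobileNetV5Config] = MobileNetV5Model

def auto_model_for(model_type):
    # Mirrors AutoConfig.for_model followed by AutoModel.from_config.
    config_cls = CONFIG_REGISTRY.get(model_type)
    if config_cls is None:
        raise ValueError(f"Unknown model type: {model_type}")
    config = config_cls()
    return MODEL_REGISTRY[type(config)](config)

model = auto_model_for("mobilenet_v5")
```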
## Benefits
- Eliminates "Unknown Model" error when loading Gemma 3n
- Provides foundation for future full implementation of mobilenet_v5
- Maintains backward compatibility
- Follows Transformers architecture standards
## Testing
- Verified that AutoConfig.for_model('mobilenet_v5') works
- Verified that AutoModel.from_config works for mobilenet_v5
- Verified that Gemma3nVisionConfig with architecture='mobilenetv5_300m_enc' works
- All components compile without syntax errors
**Note:** This is a stub implementation that prevents crashes. Full implementation of mobilenet_v5 architecture is left for future contributions.
Fixes #39208 | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39211/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39210 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39210/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39210/comments | https://api.github.com/repos/huggingface/transformers/issues/39210/events | https://github.com/huggingface/transformers/pull/39210 | 3,200,634,224 | PR_kwDOCUB6oc6dVdgS | 39,210 | Update T5gemma | {
"login": "bzhangGo",
"id": 17406686,
"node_id": "MDQ6VXNlcjE3NDA2Njg2",
"avatar_url": "https://avatars.githubusercontent.com/u/17406686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bzhangGo",
"html_url": "https://github.com/bzhangGo",
"followers_url": "https://api.github.com/users/bzhangGo/followers",
"following_url": "https://api.github.com/users/bzhangGo/following{/other_user}",
"gists_url": "https://api.github.com/users/bzhangGo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bzhangGo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bzhangGo/subscriptions",
"organizations_url": "https://api.github.com/users/bzhangGo/orgs",
"repos_url": "https://api.github.com/users/bzhangGo/repos",
"events_url": "https://api.github.com/users/bzhangGo/events{/privacy}",
"received_events_url": "https://api.github.com/users/bzhangGo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T20:58:42 | 2025-07-12T03:22:03 | 2025-07-08T17:08:48 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39210",
"html_url": "https://github.com/huggingface/transformers/pull/39210",
"diff_url": "https://github.com/huggingface/transformers/pull/39210.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39210.patch",
"merged_at": "2025-07-08T17:08:48"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
1. Add `vocab_size` to `T5GemmaConfig` to fix pipeline generation.
2. Replace t5gemma-placeholder with real checkpoints.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39210/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39209 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39209/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39209/comments | https://api.github.com/repos/huggingface/transformers/issues/39209/events | https://github.com/huggingface/transformers/pull/39209 | 3,200,586,575 | PR_kwDOCUB6oc6dVS2x | 39,209 | Standardize FSMT class naming: PretrainedFSMTModel → PreTrainedFSMTModel | {
"login": "VXXXO",
"id": 97465930,
"node_id": "U_kgDOBc82Sg",
"avatar_url": "https://avatars.githubusercontent.com/u/97465930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VXXXO",
"html_url": "https://github.com/VXXXO",
"followers_url": "https://api.github.com/users/VXXXO/followers",
"following_url": "https://api.github.com/users/VXXXO/following{/other_user}",
"gists_url": "https://api.github.com/users/VXXXO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VXXXO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VXXXO/subscriptions",
"organizations_url": "https://api.github.com/users/VXXXO/orgs",
"repos_url": "https://api.github.com/users/VXXXO/repos",
"events_url": "https://api.github.com/users/VXXXO/events{/privacy}",
"received_events_url": "https://api.github.com/users/VXXXO/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-03T20:40:29 | 2025-07-07T13:12:27 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39209",
"html_url": "https://github.com/huggingface/transformers/pull/39209",
"diff_url": "https://github.com/huggingface/transformers/pull/39209.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39209.patch",
"merged_at": null
} | This PR addresses the naming inconsistency for the FSMT base model class, as described in issue #39202.
### Changes
- Renamed `PretrainedFSMTModel` to `PreTrainedFSMTModel` to match the naming convention used across the library (e.g., `PreTrainedModel`, `PreTrainedTokenizer`)
- Updated all usages and exports accordingly
- Added a comment to the class definition for clarity
### Testing
✅ All imports work correctly
✅ Python syntax is valid
✅ No breaking changes to functionality
✅ All FSMT classes import successfully
This change improves code consistency and readability without affecting functionality.
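Whether this PR keeps the old name as a deprecated alias is not stated in the description; one minimal backward-compatible approach to such a rename (an assumption, not the PR's actual diff) looks like this:

```python
# Sketch of a rename-with-alias pattern. Whether the PR keeps an alias
# is an assumption; this is not the actual diff.

class PreTrainedFSMTModel:
    """Base class under the new, library-consistent name."""
    pass

# Backward-compatible alias so existing imports of the old name keep working
PretrainedFSMTModel = PreTrainedFSMTModel

print(PretrainedFSMTModel is PreTrainedFSMTModel)  # True
```

With the alias in place, both names resolve to the same class, so downstream code importing the old spelling does not break.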
Fixes: #39202 | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39209/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39208 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39208/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39208/comments | https://api.github.com/repos/huggingface/transformers/issues/39208/events | https://github.com/huggingface/transformers/issues/39208 | 3,200,522,879 | I_kwDOCUB6oc6-xBp_ | 39,208 | Unknown Model (mobilenetv5_300m_enc) when loading Gemma 3n | {
"login": "HenryNdubuaku",
"id": 26547576,
"node_id": "MDQ6VXNlcjI2NTQ3NTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26547576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HenryNdubuaku",
"html_url": "https://github.com/HenryNdubuaku",
"followers_url": "https://api.github.com/users/HenryNdubuaku/followers",
"following_url": "https://api.github.com/users/HenryNdubuaku/following{/other_user}",
"gists_url": "https://api.github.com/users/HenryNdubuaku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HenryNdubuaku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HenryNdubuaku/subscriptions",
"organizations_url": "https://api.github.com/users/HenryNdubuaku/orgs",
"repos_url": "https://api.github.com/users/HenryNdubuaku/repos",
"events_url": "https://api.github.com/users/HenryNdubuaku/events{/privacy}",
"received_events_url": "https://api.github.com/users/HenryNdubuaku/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-03T20:10:57 | 2025-09-18T13:09:09 | 2025-09-18T13:09:09 | NONE | null | null | null | null | ### System Info
```bash
- `transformers` version: 4.53.0
- Platform: Linux-6.1.141+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.33.2
- Safetensors version: 0.5.1
- Accelerate version: 1.2.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.5.1+cu121 (NA)
- Tensorflow version (GPU?): 2.17.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.10.2 (cpu)
- Jax version: 0.4.33
- JaxLib version: 0.4.33
- Using distributed or parallel set-up in script?: <fill in>
```
Does Gemma 3n require special setups? That is not sustainable.
### Who can help?
I upgraded to the latest transformers to try Gemma 3n, and it would seem there is no implementation of mobilenetv5_300m when I try to run the model as described on the official Hugging Face page.
```bash
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-2-ac178faa1642>](https://localhost:8080/#) in <cell line: 8>()
6 model_id = "google/gemma-3n-e4b-it"
7
----> 8 model = Gemma3nForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16,).eval()
9
10 processor = AutoProcessor.from_pretrained(model_id)
8 frames
[/usr/local/lib/python3.10/dist-packages/timm/models/_factory.py](https://localhost:8080/#) in create_model(model_name, pretrained, pretrained_cfg, pretrained_cfg_overlay, checkpoint_path, scriptable, exportable, no_jit, **kwargs)
111
112 if not is_model(model_name):
--> 113 raise RuntimeError('Unknown model (%s)' % model_name)
114
115 create_fn = model_entrypoint(model_name)
RuntimeError: Unknown model (mobilenetv5_300m_enc)
```
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The example code on the model card
```python
from transformers import AutoProcessor, Gemma3nForConditionalGeneration
from PIL import Image
import requests
import torch
model_id = "google/gemma-3n-e4b-it"
model = Gemma3nForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16,).eval()
processor = AutoProcessor.from_pretrained(model_id)
messages = [
{
"role": "system",
"content": [{"type": "text", "text": "You are a helpful assistant."}]
},
{
"role": "user",
"content": [
{"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
{"type": "text", "text": "Describe this image in detail."}
]
}
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
input_len = inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
### Expected behavior
This is the basic example on the model card. | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39208/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39208/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39207 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39207/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39207/comments | https://api.github.com/repos/huggingface/transformers/issues/39207/events | https://github.com/huggingface/transformers/pull/39207 | 3,200,520,688 | PR_kwDOCUB6oc6dVEZD | 39,207 | Update expected values (after switching to A10) - part 6 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T20:09:49 | 2025-07-03T20:45:32 | 2025-07-03T20:45:30 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39207",
"html_url": "https://github.com/huggingface/transformers/pull/39207",
"diff_url": "https://github.com/huggingface/transformers/pull/39207.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39207.patch",
"merged_at": "2025-07-03T20:45:30"
} | # What does this PR do?
As discussed offline, will merge to move fast as it's only expected outputs updated for A10
🚀 🚀 🚀 Almost! | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39207/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39206 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39206/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39206/comments | https://api.github.com/repos/huggingface/transformers/issues/39206/events | https://github.com/huggingface/transformers/pull/39206 | 3,200,180,456 | PR_kwDOCUB6oc6dT8_e | 39,206 | fix: filter None router logits in Qwen3 MoE and handle empty router logits (#39203) | {
"login": "SwiftAkira",
"id": 175894017,
"node_id": "U_kgDOCnvuAQ",
"avatar_url": "https://avatars.githubusercontent.com/u/175894017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SwiftAkira",
"html_url": "https://github.com/SwiftAkira",
"followers_url": "https://api.github.com/users/SwiftAkira/followers",
"following_url": "https://api.github.com/users/SwiftAkira/following{/other_user}",
"gists_url": "https://api.github.com/users/SwiftAkira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SwiftAkira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SwiftAkira/subscriptions",
"organizations_url": "https://api.github.com/users/SwiftAkira/orgs",
"repos_url": "https://api.github.com/users/SwiftAkira/repos",
"events_url": "https://api.github.com/users/SwiftAkira/events{/privacy}",
"received_events_url": "https://api.github.com/users/SwiftAkira/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-03T17:53:01 | 2025-07-21T11:15:27 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39206",
"html_url": "https://github.com/huggingface/transformers/pull/39206",
"diff_url": "https://github.com/huggingface/transformers/pull/39206.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39206.patch",
"merged_at": null
} | ## What does this PR do?
This PR fixes issue #39203 where Qwen3 MoE models crash when mlp_only_layers is non-empty and output_router_logits=True. The issue occurs because MLP-only layers return None router logits, which are incorrectly collected and passed to load_balancing_loss_func, causing a TypeError.
## Root Cause Analysis
The problem was in the router logits collection logic in Qwen3MoeModel.forward(). Unlike Qwen2 MoE which properly filters None values, Qwen3 MoE was collecting all layer outputs without null checks:
- MLP-only layers (specified in mlp_only_layers) return None for router logits since they don't use expert routing
- The original code collected these None values into the router_logits tuple
- When load_balancing_loss_func processes this tuple, it fails on None entries
## Solution
This PR implements two complementary fixes:
1. **Router logits null check**: Added proper filtering during collection to match Qwen2 MoE pattern:
```python
# Before (broken):
if output_router_logits:
all_router_logits += (layer_outputs[-1],)
# After (fixed):
if output_router_logits and layer_outputs[-1] is not None:
all_router_logits += (layer_outputs[-1],)
```
2. **Empty tuple handling**: Added a custom load_balancing_loss_func that gracefully handles the edge case where all layers are MLP-only (resulting in an empty router_logits tuple):
```python
if len(gate_logits) == 0:
return 0
```
## Implementation Details
All changes were made in the modular architecture:
- **Source file**: src/transformers/models/qwen3_moe/modular_qwen3_moe.py (hand-edited)
- **Generated file**: src/transformers/models/qwen3_moe/modeling_qwen3_moe.py (auto-generated)
The fix follows the established pattern from Qwen2 MoE, ensuring consistency across the codebase.
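The two fixes can be exercised with plain Python stand-ins for the layer outputs — lists in place of tensors, with `None` standing in for the MLP-only layers (names here are illustrative, not the modeling code itself):

```python
# Sketch of the router-logits collection fix. Layer outputs whose last
# element is None stand in for MLP-only layers.

def collect_router_logits(layer_outputs_per_layer, output_router_logits=True):
    all_router_logits = ()
    for layer_outputs in layer_outputs_per_layer:
        # The fix: skip MLP-only layers, which return None router logits
        if output_router_logits and layer_outputs[-1] is not None:
            all_router_logits += (layer_outputs[-1],)
    return all_router_logits

def load_balancing_loss(gate_logits):
    # Edge case from the PR: all layers MLP-only -> empty tuple -> loss of 0
    if len(gate_logits) == 0:
        return 0
    # (the real function computes the auxiliary load-balancing loss here)
    return sum(sum(logits) for logits in gate_logits)

# Mixed configuration: layers 1 and 3 are MLP-only
mixed = [("h0", [0.1, 0.9]), ("h1", None), ("h2", [0.5, 0.5]), ("h3", None)]
print(len(collect_router_logits(mixed)))  # 2
print(load_balancing_loss(()))            # 0
```

The `None` check reproduces the collection-time filter, and the empty-tuple guard reproduces the all-MLP edge case described in the test matrix above.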
## Testing
Comprehensive testing was performed with various configurations:
1. **Mixed configuration** (mlp_only_layers=[1,3]):
- Correctly collects 2 router logits from MoE layers
- Successfully computes auxiliary loss
2. **All MoE configuration** (mlp_only_layers=[]):
- Collects router logits from all layers
- Standard auxiliary loss computation
3. **All MLP configuration** (mlp_only_layers=[0,1,2,3]):
- Results in empty router logits tuple
- Auxiliary loss returns 0 (no routing needed)
All test cases pass without errors.
## Backward Compatibility
This fix is fully backward compatible:
- Existing models continue to work unchanged
- Only adds null checks with minimal performance overhead
- Maintains the same API and behavior for valid configurations
## Fixes
Closes #39203
## How was this patch tested?
- Manual testing with Qwen3 MoE models using different mlp_only_layers configurations
- Verified proper router logits collection and auxiliary loss computation
- Tested edge cases including all-MLP and all-MoE scenarios
- Validated that no None values appear in the final router_logits tuple
cc @ArthurZucker @ntenenz | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39206/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39205 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39205/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39205/comments | https://api.github.com/repos/huggingface/transformers/issues/39205/events | https://github.com/huggingface/transformers/pull/39205 | 3,200,168,330 | PR_kwDOCUB6oc6dT6TF | 39,205 | Update expected values (after switching to A10) - part 5 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T17:48:41 | 2025-07-03T18:02:06 | 2025-07-03T17:56:02 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39205",
"html_url": "https://github.com/huggingface/transformers/pull/39205",
"diff_url": "https://github.com/huggingface/transformers/pull/39205.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39205.patch",
"merged_at": "2025-07-03T17:56:02"
} | # What does this PR do?
As discussed offline, will merge to move fast as it's only expected outputs updated for A10
We are close! | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39205/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39204 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39204/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39204/comments | https://api.github.com/repos/huggingface/transformers/issues/39204/events | https://github.com/huggingface/transformers/issues/39204 | 3,200,155,889 | I_kwDOCUB6oc6-voDx | 39,204 | When creating a Trainer object for a MixedModel, the initialization tries to access attribute get_base_model (which does not exist) rather than model | {
"login": "rluss",
"id": 3778820,
"node_id": "MDQ6VXNlcjM3Nzg4MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3778820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rluss",
"html_url": "https://github.com/rluss",
"followers_url": "https://api.github.com/users/rluss/followers",
"following_url": "https://api.github.com/users/rluss/following{/other_user}",
"gists_url": "https://api.github.com/users/rluss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rluss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rluss/subscriptions",
"organizations_url": "https://api.github.com/users/rluss/orgs",
"repos_url": "https://api.github.com/users/rluss/repos",
"events_url": "https://api.github.com/users/rluss/events{/privacy}",
"received_events_url": "https://api.github.com/users/rluss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-03T17:43:57 | 2025-07-16T13:51:32 | 2025-07-16T13:51:31 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.52.4
- Platform: Linux-5.14.0-503.21.1.el9_5.x86_64-x86_64-with-glibc2.34
- Python version: 3.12.0
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: 1.6.0
- Accelerate config: not found
- DeepSpeed version: 0.16.7
- PyTorch version (GPU?): 2.7.0+cu126 (True)
- Tensorflow version (GPU?): 2.19.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
@zach-huggingface @SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from peft import PeftConfig, PeftModel, get_peft_model, LoraConfig, TaskType
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers import Trainer, TrainingArguments, DataCollatorWithPadding
import datasets
import torch
import os
# get base model and tokenizer
model_id = "google/gemma-2-2b-it"
access_token = os.getenv("HF_ACCESS_KEY", None)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2, token=access_token)
tokenizer = AutoTokenizer.from_pretrained(model_id, token=access_token)
peft_config = LoraConfig(
r=8,
task_type=TaskType.CAUSAL_LM,
target_modules=["q_proj", "v_proj"]
)
model = get_peft_model(model, peft_config, adapter_name='mrpc1', mixed=True)
_ = model.add_adapter(adapter_name='mrpc2', peft_config=peft_config)
raw_datasets = datasets.load_dataset("glue", "sst2")
def tokenize_function(example):
return tokenizer(example["sentence"], truncation=True)
# Tokenize the entire dataset
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
training_args = TrainingArguments("sst2-finetuned-model")
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
)
```
### Expected behavior
I would expect this code to run successfully so that I can then run `trainer.train()`. I pasted the traceback of the error below. I believe it is related to the fact that PeftMixedModel does not provide a `get_base_model` method. See https://github.com/huggingface/transformers/blob/51f94ea06d19a6308c61bbb4dc97c40aabd12bad/src/transformers/trainer.py#L919 where the comment "# PeftMixedModel do not provide a `get_base_model` method" remains in the code. I believe the same fix was not applied where my code hits an error, at https://github.com/huggingface/transformers/blob/51f94ea06d19a6308c61bbb4dc97c40aabd12bad/src/transformers/trainer.py#L637. I was able to locally fix this for myself at L637 by replacing `else unwrapped_model.get_base_model().forward` with
`else (unwrapped_model.get_base_model().forward if hasattr(unwrapped_model, "get_base_model") else unwrapped_model.base_model.model.forward)`
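The guarded lookup can be illustrated with a small, self-contained sketch. The classes below are stand-ins for the real Trainer/PEFT objects, not the `transformers` implementation; only the `hasattr` fallback pattern is the point:

```python
class BaseModel:
    def forward(self):
        return "base forward"

class PlainPeftModel:
    """Stands in for a regular PeftModel, which exposes get_base_model()."""
    def __init__(self):
        self._base = BaseModel()
    def get_base_model(self):
        return self._base

class MixedPeftModel:
    """Stands in for PeftMixedModel: no get_base_model(), only .base_model.model."""
    class _Tuner:
        def __init__(self):
            self.model = BaseModel()
    def __init__(self):
        self.base_model = self._Tuner()

def pick_forward(unwrapped_model):
    # The proposed fix: fall back to base_model.model.forward when the
    # get_base_model() accessor is missing (as on PeftMixedModel).
    if hasattr(unwrapped_model, "get_base_model"):
        return unwrapped_model.get_base_model().forward
    return unwrapped_model.base_model.model.forward

print(pick_forward(PlainPeftModel())())  # base forward
print(pick_forward(MixedPeftModel())())  # base forward
```

Both model shapes resolve to the same underlying `forward`, which is what the one-line change at trainer.py L637 achieves.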
```
Traceback (most recent call last):
  File "/dccstor/rluss1/envs/trainer8/lib/python3.12/site-packages/peft/mixed_model.py", line 197, in __getattr__
    return super().__getattr__(name)  # defer to nn.Module's logic
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/dccstor/rluss1/envs/trainer8/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1940, in __getattr__
    raise AttributeError(
AttributeError: 'PeftMixedModel' object has no attribute 'get_base_model'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/dccstor/rluss1/envs/trainer8/lib/python3.12/site-packages/peft/tuners/mixed/model.py", line 192, in __getattr__
    return super().__getattr__(name)  # defer to nn.Module's logic
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/dccstor/rluss1/envs/trainer8/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1940, in __getattr__
    raise AttributeError(
AttributeError: 'MixedModel' object has no attribute 'get_base_model'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/dccstor/rluss1/Research_Projects/PrincipledAI/Orchestrators/lorasub/repos/trainer_expts/temp/HF_peftmixed_model_test.py", line 32, in <module>
    trainer = Trainer(
              ^^^^^^^^
  File "/dccstor/rluss1/envs/trainer8/lib/python3.12/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/dccstor/rluss1/envs/trainer8/lib/python3.12/site-packages/transformers/trainer.py", line 636, in __init__
    else unwrapped_model.get_base_model().forward
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/dccstor/rluss1/envs/trainer8/lib/python3.12/site-packages/peft/mixed_model.py", line 199, in __getattr__
    return getattr(self.base_model, name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/dccstor/rluss1/envs/trainer8/lib/python3.12/site-packages/peft/tuners/mixed/model.py", line 194, in __getattr__
    return getattr(self.model, name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/dccstor/rluss1/envs/trainer8/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1940, in __getattr__
    raise AttributeError(
AttributeError: 'Gemma2ForSequenceClassification' object has no attribute 'get_base_model'. Did you mean: 'base_model'?
``` | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39204/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39203 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39203/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39203/comments | https://api.github.com/repos/huggingface/transformers/issues/39203/events | https://github.com/huggingface/transformers/issues/39203 | 3,199,797,671 | I_kwDOCUB6oc6-uQmn | 39,203 | Qwen3 MOE models w/non-empty `mlp_only_layers` fail when `output_router_logits=True` | {
"login": "ntenenz",
"id": 8411908,
"node_id": "MDQ6VXNlcjg0MTE5MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8411908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ntenenz",
"html_url": "https://github.com/ntenenz",
"followers_url": "https://api.github.com/users/ntenenz/followers",
"following_url": "https://api.github.com/users/ntenenz/following{/other_user}",
"gists_url": "https://api.github.com/users/ntenenz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ntenenz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ntenenz/subscriptions",
"organizations_url": "https://api.github.com/users/ntenenz/orgs",
"repos_url": "https://api.github.com/users/ntenenz/repos",
"events_url": "https://api.github.com/users/ntenenz/events{/privacy}",
"received_events_url": "https://api.github.com/users/ntenenz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-03T15:28:56 | 2025-08-18T08:03:27 | 2025-08-18T08:03:27 | NONE | null | null | null | null | ### System Info
Layers listed in `mlp_only_layers` use a plain MLP, which inserts a `None` into the router logit list. Therefore, when such layers are enabled in the model, these `None` entries need to be filtered out either before or within the `aux_loss` function.
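A minimal, pure-Python sketch of the filtering suggested above (the actual fix would operate on router-logit tensors inside the load-balancing loss; names and values here are illustrative stand-ins):

```python
# Layers replaced by a plain MLP contribute None to all_router_logits,
# so drop those entries before stacking/computing the aux loss.
def filter_router_logits(all_router_logits):
    return tuple(logits for logits in all_router_logits if logits is not None)

# Stand-in values: strings in place of per-layer logit tensors.
collected = ("layer0_logits", None, "layer2_logits")  # None from an mlp_only layer
print(filter_router_logits(collected))  # ('layer0_logits', 'layer2_logits')
```

With the `None` entries removed, the remaining per-layer logits can be concatenated safely, which is the behavior the reproduction below currently fails to reach.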
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoConfig, AutoModelForCausalLM
config = AutoConfig.from_pretrained("Qwen/Qwen3-30B-A3B") # or any qwen3_moe model
config.update({"mlp_only_layers": [0]}) # or any non-empty list
model = AutoModelForCausalLM.from_config(config)
_ = model(INPUT_TOKENS, output_router_logits=True) # raises an error in the aux_loss function
```
### Expected behavior
Model should output router logits and not raise an exception. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39203/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39202 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39202/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39202/comments | https://api.github.com/repos/huggingface/transformers/issues/39202/events | https://github.com/huggingface/transformers/issues/39202 | 3,199,668,717 | I_kwDOCUB6oc6-txHt | 39,202 | Naming inconsistencies of `PreTrained*` classes. | {
"login": "vitormbesen",
"id": 158235180,
"node_id": "U_kgDOCW56LA",
"avatar_url": "https://avatars.githubusercontent.com/u/158235180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitormbesen",
"html_url": "https://github.com/vitormbesen",
"followers_url": "https://api.github.com/users/vitormbesen/followers",
"following_url": "https://api.github.com/users/vitormbesen/following{/other_user}",
"gists_url": "https://api.github.com/users/vitormbesen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitormbesen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitormbesen/subscriptions",
"organizations_url": "https://api.github.com/users/vitormbesen/orgs",
"repos_url": "https://api.github.com/users/vitormbesen/repos",
"events_url": "https://api.github.com/users/vitormbesen/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitormbesen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-07-03T14:50:53 | 2025-07-03T14:52:02 | null | NONE | null | null | null | null | ### Feature request
Add an alias `PreTrainedConfig` for the class `PretrainedConfig`, to follow PascalCase convention more closely, and to mirror other `PreTrained*` classes.
### Motivation
`PreTrainedModel` and `PreTrainedTokenizer` follow the PascalCase naming convention; `PretrainedConfig`, however, does not. This inconsistency, though minor, is inconvenient.
### Your contribution
Add the line `PreTrainedConfig = PretrainedConfig` and submit a PR. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39202/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39201 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39201/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39201/comments | https://api.github.com/repos/huggingface/transformers/issues/39201/events | https://github.com/huggingface/transformers/issues/39201 | 3,199,650,721 | I_kwDOCUB6oc6-tsuh | 39,201 | No or astronomical loss in `ModernBertForMultipleChoice` | {
"login": "netique",
"id": 34926852,
"node_id": "MDQ6VXNlcjM0OTI2ODUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34926852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/netique",
"html_url": "https://github.com/netique",
"followers_url": "https://api.github.com/users/netique/followers",
"following_url": "https://api.github.com/users/netique/following{/other_user}",
"gists_url": "https://api.github.com/users/netique/gists{/gist_id}",
"starred_url": "https://api.github.com/users/netique/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/netique/subscriptions",
"organizations_url": "https://api.github.com/users/netique/orgs",
"repos_url": "https://api.github.com/users/netique/repos",
"events_url": "https://api.github.com/users/netique/events{/privacy}",
"received_events_url": "https://api.github.com/users/netique/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-03T14:44:59 | 2025-07-04T17:53:26 | 2025-07-04T17:12:02 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.54.0.dev0
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python version: 3.11.13
- Huggingface_hub version: 0.33.1
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): 2.18.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.10.6 (gpu)
- Jax version: 0.5.2
- JaxLib version: 0.5.1
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100-SXM4-40GB
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
As there is no implementation of the multiple-choice question-answering task for ModernBERT, I did my best and came up with the class below. However, most of the time during fine-tuning the output of `self.classifier` is full of `nan`s; sometimes the values are there but lead to astronomical losses. I used the preprocessing function from https://huggingface.co/docs/transformers/tasks/multiple_choice with the SWAG dataset as well.
I think it could be linked to issue #38720 or #38982 (though I didn't use any gradient accumulation).
```python
from typing import Optional, Union

import torch
from torch import nn

from transformers import ModernBertConfig, ModernBertModel, ModernBertPreTrainedModel
from transformers.modeling_outputs import MultipleChoiceModelOutput
from transformers.models.modernbert.modeling_modernbert import ModernBertPredictionHead


class ModernBertForMultipleChoice(ModernBertPreTrainedModel):
    def __init__(self, config: ModernBertConfig):
        super().__init__(config)
        self.model = ModernBertModel(config)
        self.head = ModernBertPredictionHead(config)
        self.drop = torch.nn.Dropout(config.classifier_dropout)
        self.classifier = nn.Linear(config.hidden_size, 1)  # only one option is correct
        self.post_init()

    def forward(
        self,
        input_ids: Optional[torch.Tensor] = None,
        attention_mask: Optional[torch.Tensor] = None,
        sliding_window_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.Tensor] = None,
        inputs_embeds: Optional[torch.Tensor] = None,
        labels: Optional[torch.Tensor] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
        **kwargs,
    ) -> Union[tuple[torch.Tensor], MultipleChoiceModelOutput]:
        r"""
        labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
            Labels for computing the multiple choice classification loss. Indices should be in `[0, ...,
            num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. (See
            `input_ids` above)
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]

        input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None
        attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None
        position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None
        inputs_embeds = (
            inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1))
            if inputs_embeds is not None
            else None
        )

        self._maybe_set_compile()

        outputs = self.model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            sliding_window_mask=sliding_window_mask,
            position_ids=position_ids,
            inputs_embeds=inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        last_hidden_state = outputs[0]

        if self.config.classifier_pooling == "cls":  # need to edit Config, but we receive cls here for free
            last_hidden_state = last_hidden_state[:, 0]
        elif self.config.classifier_pooling == "mean":
            last_hidden_state = (last_hidden_state * attention_mask.unsqueeze(-1)).sum(dim=1) / attention_mask.sum(
                dim=1, keepdim=True
            )

        pooled_output = self.head(last_hidden_state)
        pooled_output = self.drop(pooled_output)
        logits = self.classifier(pooled_output)
        reshaped_logits = logits.view(-1, num_choices)

        loss = None
        if labels is not None:
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(reshaped_logits, labels)

        if not return_dict:
            output = (reshaped_logits,) + outputs[1:]
            return ((loss,) + output) if loss is not None else output

        return MultipleChoiceModelOutput(
            loss=loss,
            logits=reshaped_logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
```
### Expected behavior
1. Training should be stable
2. Official `ModernBertForMultipleChoice` should be implemented | {
"login": "netique",
"id": 34926852,
"node_id": "MDQ6VXNlcjM0OTI2ODUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34926852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/netique",
"html_url": "https://github.com/netique",
"followers_url": "https://api.github.com/users/netique/followers",
"following_url": "https://api.github.com/users/netique/following{/other_user}",
"gists_url": "https://api.github.com/users/netique/gists{/gist_id}",
"starred_url": "https://api.github.com/users/netique/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/netique/subscriptions",
"organizations_url": "https://api.github.com/users/netique/orgs",
"repos_url": "https://api.github.com/users/netique/repos",
"events_url": "https://api.github.com/users/netique/events{/privacy}",
"received_events_url": "https://api.github.com/users/netique/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39201/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39200 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39200/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39200/comments | https://api.github.com/repos/huggingface/transformers/issues/39200/events | https://github.com/huggingface/transformers/pull/39200 | 3,199,646,075 | PR_kwDOCUB6oc6dSIDb | 39,200 | fix: HWIO to OIHW | {
"login": "RyanMullins",
"id": 868555,
"node_id": "MDQ6VXNlcjg2ODU1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/868555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanMullins",
"html_url": "https://github.com/RyanMullins",
"followers_url": "https://api.github.com/users/RyanMullins/followers",
"following_url": "https://api.github.com/users/RyanMullins/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanMullins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanMullins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanMullins/subscriptions",
"organizations_url": "https://api.github.com/users/RyanMullins/orgs",
"repos_url": "https://api.github.com/users/RyanMullins/repos",
"events_url": "https://api.github.com/users/RyanMullins/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanMullins/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T14:43:20 | 2025-07-25T17:23:15 | 2025-07-25T17:23:15 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39200",
"html_url": "https://github.com/huggingface/transformers/pull/39200",
"diff_url": "https://github.com/huggingface/transformers/pull/39200.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39200.patch",
"merged_at": "2025-07-25T17:23:15"
} | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39200/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39199 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39199/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39199/comments | https://api.github.com/repos/huggingface/transformers/issues/39199/events | https://github.com/huggingface/transformers/pull/39199 | 3,199,492,615 | PR_kwDOCUB6oc6dRnBR | 39,199 | Fix errors when using verl to train GLM4.1v model | {
"login": "kaln27",
"id": 86989360,
"node_id": "MDQ6VXNlcjg2OTg5MzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/86989360?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaln27",
"html_url": "https://github.com/kaln27",
"followers_url": "https://api.github.com/users/kaln27/followers",
"following_url": "https://api.github.com/users/kaln27/following{/other_user}",
"gists_url": "https://api.github.com/users/kaln27/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaln27/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaln27/subscriptions",
"organizations_url": "https://api.github.com/users/kaln27/orgs",
"repos_url": "https://api.github.com/users/kaln27/repos",
"events_url": "https://api.github.com/users/kaln27/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaln27/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-03T13:54:24 | 2025-07-09T09:56:24 | 2025-07-08T09:39:31 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39199",
"html_url": "https://github.com/huggingface/transformers/pull/39199",
"diff_url": "https://github.com/huggingface/transformers/pull/39199.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39199.patch",
"merged_at": "2025-07-08T09:39:31"
} | * Support loading glm4v from AutoModelForVision2Seq
* Set the glm4v model's _checkpoint_conversion_mapping attribute from None to an empty dict {}
# What does this PR do?
When using [verl](https://github.com/volcengine/verl) to train the GLM4.1v model with GRPO, there are several small errors. Here is how this PR fixes them:
- support loading glm4v via `AutoModelForVision2Seq`
- verl treats `_checkpoint_conversion_mapping` as a dict, but it is currently `None`, which aborts the program. I also found that almost every model that doesn't need checkpoint conversion has an empty dict.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39199/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39198 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39198/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39198/comments | https://api.github.com/repos/huggingface/transformers/issues/39198/events | https://github.com/huggingface/transformers/pull/39198 | 3,199,023,623 | PR_kwDOCUB6oc6dQA6B | 39,198 | CI workflow for performed test regressions | {
"login": "ahadnagy",
"id": 21314428,
"node_id": "MDQ6VXNlcjIxMzE0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/21314428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahadnagy",
"html_url": "https://github.com/ahadnagy",
"followers_url": "https://api.github.com/users/ahadnagy/followers",
"following_url": "https://api.github.com/users/ahadnagy/following{/other_user}",
"gists_url": "https://api.github.com/users/ahadnagy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahadnagy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahadnagy/subscriptions",
"organizations_url": "https://api.github.com/users/ahadnagy/orgs",
"repos_url": "https://api.github.com/users/ahadnagy/repos",
"events_url": "https://api.github.com/users/ahadnagy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahadnagy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T11:18:14 | 2025-07-16T02:20:02 | 2025-07-16T02:20:02 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39198",
"html_url": "https://github.com/huggingface/transformers/pull/39198",
"diff_url": "https://github.com/huggingface/transformers/pull/39198.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39198.patch",
"merged_at": "2025-07-16T02:20:02"
} | # What does this PR do?
This PR adds a new workflow to the CI that dumps a report about the differences in performed tests. We noticed that the test counts sometimes fluctuate quite a bit between CI runs, and this report will help in narrowing down the root cause and finding regressions (a removed test, a catastrophic failure during execution, CI flakiness, a runner error, etc.).
The report looks like the following:
```
=== Diff for job: albert ===
--- Absent in current run:
- SKIPPED [1] tests/test_modeling_common.py:4029
- SKIPPED [1] tests/test_modeling_common.py:4085
- SKIPPED [1] tests/test_modeling_common.py:4162
- SKIPPED [1] tests/test_modeling_common.py:4233
+++ Appeared in current run:
+ PASSED tests/models/albert/test_modeling_albert.py::AlbertModelTest::test_flash_attention_2_padding_matches_padding_free_with_position_ids
+ PASSED tests/models/albert/test_modeling_albert.py::AlbertModelTest::test_flash_attention_2_padding_matches_padding_free_with_position_ids_and_fa_kwargs
+ PASSED tests/models/albert/test_modeling_albert.py::AlbertModelTest::test_flash_attn_2_fp32_ln
+ PASSED tests/models/albert/test_modeling_albert.py::AlbertModelTest::test_flash_attn_2_from_config
```
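Conceptually, the diff boils down to a set comparison between the test-result lines of two runs. A minimal sketch of the idea (illustrative only — the helper name is hypothetical, not the actual CI script) could look like:

```python
def diff_test_runs(previous, current):
    """Compare two lists of pytest result lines and report the differences.

    `previous` and `current` are lists of strings such as
    "PASSED tests/models/albert/test_modeling_albert.py::AlbertModelTest::test_foo".
    Illustrative sketch of the report's logic, not the real implementation.
    """
    prev_set, curr_set = set(previous), set(current)
    absent = sorted(prev_set - curr_set)    # lines that disappeared in the current run
    appeared = sorted(curr_set - prev_set)  # lines that are new in the current run
    lines = ["--- Absent in current run:"]
    lines += [f"- {t}" for t in absent]
    lines += ["+++ Appeared in current run:"]
    lines += [f"+ {t}" for t in appeared]
    return "\n".join(lines)
```

Tests present in both runs cancel out, so the report only surfaces what changed.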
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39198/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39197 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39197/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39197/comments | https://api.github.com/repos/huggingface/transformers/issues/39197/events | https://github.com/huggingface/transformers/pull/39197 | 3,198,977,500 | PR_kwDOCUB6oc6dP2rp | 39,197 | Granite speech speedups | {
"login": "avihu111",
"id": 39214195,
"node_id": "MDQ6VXNlcjM5MjE0MTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/39214195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avihu111",
"html_url": "https://github.com/avihu111",
"followers_url": "https://api.github.com/users/avihu111/followers",
"following_url": "https://api.github.com/users/avihu111/following{/other_user}",
"gists_url": "https://api.github.com/users/avihu111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avihu111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avihu111/subscriptions",
"organizations_url": "https://api.github.com/users/avihu111/orgs",
"repos_url": "https://api.github.com/users/avihu111/repos",
"events_url": "https://api.github.com/users/avihu111/events{/privacy}",
"received_events_url": "https://api.github.com/users/avihu111/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T11:01:34 | 2025-07-09T21:10:01 | 2025-07-09T21:09:51 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39197",
"html_url": "https://github.com/huggingface/transformers/pull/39197",
"diff_url": "https://github.com/huggingface/transformers/pull/39197.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39197.patch",
"merged_at": "2025-07-09T21:09:51"
} | # What does this PR do?
This PR speeds up Granite speech with the following changes:
1. Register `attention_dist` as a buffer to avoid a CPU-to-GPU transfer in every layer.
2. Use `pad_sequence`, which is much faster than per-sample padding + concat.
3. Avoid moving audio back to the CPU when using a compute device.
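For change 2, the idea — shown here as a pure-Python sketch of the semantics; the actual code operates on tensors with `torch.nn.utils.rnn.pad_sequence` — is to build the padded batch in one preallocated pass instead of padding each sample separately and concatenating:

```python
def pad_batch_per_sample(seqs, pad_value=0.0):
    # Old path: pad each sample individually, then collect the results.
    max_len = max(len(s) for s in seqs)
    return [list(s) + [pad_value] * (max_len - len(s)) for s in seqs]


def pad_batch_preallocated(seqs, pad_value=0.0):
    # pad_sequence-style path: allocate the full padded batch once,
    # then copy each sequence into its row.
    max_len = max(len(s) for s in seqs)
    batch = [[pad_value] * max_len for _ in seqs]
    for row, s in zip(batch, seqs):
        row[: len(s)] = list(s)
    return batch
```

Both paths produce the same padded batch; on tensors, the single preallocated copy avoids many small allocations and concatenations.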
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @eustlb can you give that a look 🙏
CC: @gsaon @alex-jw-brooks @avishaiElmakies | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39197/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39197/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39196 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39196/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39196/comments | https://api.github.com/repos/huggingface/transformers/issues/39196/events | https://github.com/huggingface/transformers/pull/39196 | 3,198,802,749 | PR_kwDOCUB6oc6dPQIJ | 39,196 | fix typo in Gemma3n notes | {
"login": "davanstrien",
"id": 8995957,
"node_id": "MDQ6VXNlcjg5OTU5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davanstrien",
"html_url": "https://github.com/davanstrien",
"followers_url": "https://api.github.com/users/davanstrien/followers",
"following_url": "https://api.github.com/users/davanstrien/following{/other_user}",
"gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions",
"organizations_url": "https://api.github.com/users/davanstrien/orgs",
"repos_url": "https://api.github.com/users/davanstrien/repos",
"events_url": "https://api.github.com/users/davanstrien/events{/privacy}",
"received_events_url": "https://api.github.com/users/davanstrien/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T09:59:29 | 2025-07-07T12:41:38 | 2025-07-07T12:41:33 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39196",
"html_url": "https://github.com/huggingface/transformers/pull/39196",
"diff_url": "https://github.com/huggingface/transformers/pull/39196.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39196.patch",
"merged_at": "2025-07-07T12:41:33"
} | # What does this PR do?
Fixes # (issue)
fix typo in Gemma3n notes
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39196/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39196/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39195 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39195/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39195/comments | https://api.github.com/repos/huggingface/transformers/issues/39195/events | https://github.com/huggingface/transformers/pull/39195 | 3,198,801,798 | PR_kwDOCUB6oc6dPP68 | 39,195 | Expectations re-order and corrected FA3 skip | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T09:59:09 | 2025-07-07T09:42:33 | 2025-07-07T09:42:33 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39195",
"html_url": "https://github.com/huggingface/transformers/pull/39195",
"diff_url": "https://github.com/huggingface/transformers/pull/39195.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39195.patch",
"merged_at": "2025-07-07T09:42:33"
This PR changes the order of priority of `Expectations` to prefer the default expectation over the cross-device match, as discussed with @ydshieh. It also changes the decorator of a FA3 test from `require_flash_attn` to `require_flash_attn_3`. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39195/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39194 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39194/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39194/comments | https://api.github.com/repos/huggingface/transformers/issues/39194/events | https://github.com/huggingface/transformers/pull/39194 | 3,198,781,456 | PR_kwDOCUB6oc6dPLfs | 39,194 | Add packed tensor format support for flex/sdpa/eager through the mask! | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-03T09:52:35 | 2025-07-08T10:03:29 | 2025-07-04T07:01:57 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39194",
"html_url": "https://github.com/huggingface/transformers/pull/39194",
"diff_url": "https://github.com/huggingface/transformers/pull/39194.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39194.patch",
"merged_at": "2025-07-04T07:01:57"
} | # What does this PR do?
As per the title.
```python
import torch
from transformers import AutoModelForCausalLM
from transformers.masking_utils import create_causal_mask
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B", torch_dtype=torch.float16)
batch_size = 1
sequence_length = 10
cache_position = torch.arange(sequence_length)
position_ids = torch.tensor([[0,1,2,3,0,1,0,1,2,3]]) # This corresponds to 3 packed sequences
attention_mask = create_causal_mask(
config=model.config,
# we only need batch size, seq_length and dtype here - we don't care about the values of the embeddings
input_embeds=torch.empty((batch_size, sequence_length), dtype=model.dtype),
attention_mask=None,
cache_position=cache_position,
past_key_values=None,
position_ids=position_ids,
)
attention_mask
>>> tensor([[[[ True, False, False, False, False, False, False, False, False, False],
[ True, True, False, False, False, False, False, False, False, False],
[ True, True, True, False, False, False, False, False, False, False],
[ True, True, True, True, False, False, False, False, False, False],
[False, False, False, False, True, False, False, False, False, False],
[False, False, False, False, True, True, False, False, False, False],
[False, False, False, False, False, False, True, False, False, False],
[False, False, False, False, False, False, True, True, False, False],
[False, False, False, False, False, False, True, True, True, False],
[False, False, False, False, False, False, True, True, True, True]]]])
```
<img width="542" alt="Screenshot 2025-07-03 at 11 56 22" src="https://github.com/user-attachments/assets/d6b2b62f-5937-4afb-a36c-e9f510d4d6cd" />
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39194/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39194/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39193 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39193/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39193/comments | https://api.github.com/repos/huggingface/transformers/issues/39193/events | https://github.com/huggingface/transformers/pull/39193 | 3,198,739,738 | PR_kwDOCUB6oc6dPCY3 | 39,193 | fix(pipelines): QA pipeline returns fewer than top_k results in batch mode | {
"login": "Vixel2006",
"id": 166058059,
"node_id": "U_kgDOCeXYSw",
"avatar_url": "https://avatars.githubusercontent.com/u/166058059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vixel2006",
"html_url": "https://github.com/Vixel2006",
"followers_url": "https://api.github.com/users/Vixel2006/followers",
"following_url": "https://api.github.com/users/Vixel2006/following{/other_user}",
"gists_url": "https://api.github.com/users/Vixel2006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vixel2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vixel2006/subscriptions",
"organizations_url": "https://api.github.com/users/Vixel2006/orgs",
"repos_url": "https://api.github.com/users/Vixel2006/repos",
"events_url": "https://api.github.com/users/Vixel2006/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vixel2006/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T09:38:40 | 2025-07-17T08:24:30 | 2025-07-17T08:24:30 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39193",
"html_url": "https://github.com/huggingface/transformers/pull/39193",
"diff_url": "https://github.com/huggingface/transformers/pull/39193.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39193.patch",
"merged_at": "2025-07-17T08:24:30"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #38984
This PR fixes a bug in the `QuestionAnsweringPipeline` where it could return fewer than the requested `top_k` answers when processing long contexts or batched inputs. The original implementation processed each context chunk independently, asking for only the `top_k` best spans from each chunk before aggregation. This approach was flawed: if the best candidates within a chunk were later invalidated or identified as duplicates of an answer from another chunk, the pipeline had no other options to fall back on, resulting in too few final answers. The fix implements a more robust strategy: first over-fetch a much larger pool of candidates from every chunk, then aggregate them into a global list, and only then sort, merge, and select the final `top_k` answers. This guarantees a sufficient number of valid candidates to produce a reliable and complete result.
```py
import transformers
architecture = "csarron/mobilebert-uncased-squad-v2"
tokenizer = transformers.AutoTokenizer.from_pretrained(architecture, low_cpu_mem_usage=True)
model = transformers.MobileBertForQuestionAnswering.from_pretrained(
architecture, low_cpu_mem_usage=True
)
pipeline = transformers.pipeline(task="question-answering", model=model, tokenizer=tokenizer)
data = [
{'question': ['What color is it?', 'How do the people go?', "What does the 'wolf' howl at?"],
'context': [
"Some people said it was green but I know that it's pink.",
'The people on the bus go up and down. Up and down.',
"The pack of 'wolves' stood on the cliff and a 'lone wolf' howled at the moon for hours."
]}
]
# prediction result is wrong
pipeline(data, top_k=2, max_answer_len=5)
```
```
[[{'score': 0.5683297514915466, 'start': 51, 'end': 55, 'answer': 'pink'}, {'score': 0.028800610452890396, 'start': 51, 'end': 56, 'answer': 'pink.'}], [{'score': 0.3008899986743927, 'start': 25, 'end': 36, 'answer': 'up and down'}, {'score': 0.12070021033287048, 'start': 38, 'end': 49, 'answer': 'Up and down'}], [{'score': 0.8356598615646362, 'start': 68, 'end': 76, 'answer': 'the moon'}, {'score': 0.0971309095621109, 'start': 72, 'end': 76, 'answer': 'moon'}]]
```
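The over-fetch-then-aggregate strategy described above can be sketched as follows (hypothetical helper and tuple layout, not the pipeline's actual internals):

```python
def select_top_k(chunk_candidates, top_k):
    """Pick the final top_k answers from candidates gathered across chunks.

    `chunk_candidates` is a list of lists; each inner list holds
    (score, start, end) tuples over-fetched from one context chunk.
    Illustrative sketch of the fix's idea, not the real pipeline code.
    """
    # 1) Aggregate a global pool of candidates from every chunk.
    pool = [cand for chunk in chunk_candidates for cand in chunk]
    # 2) Sort globally by score, best first.
    pool.sort(key=lambda c: c[0], reverse=True)
    # 3) Deduplicate by (start, end) span, then keep the top_k survivors.
    seen, answers = set(), []
    for score, start, end in pool:
        if (start, end) in seen:
            continue
        seen.add((start, end))
        answers.append((score, start, end))
        if len(answers) == top_k:
            break
    return answers
```

Because duplicates are dropped only after the global sort, a span invalidated in one chunk can still be replaced by a lower-scoring but valid candidate from the over-fetched pool.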
## Who can review?
@Rocketknight1
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39193/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39192 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39192/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39192/comments | https://api.github.com/repos/huggingface/transformers/issues/39192/events | https://github.com/huggingface/transformers/pull/39192 | 3,198,703,139 | PR_kwDOCUB6oc6dO6aR | 39,192 | fix(pipelines): QA pipeline returns fewer than top_k results in batch mode | {
"login": "Vixel2006",
"id": 166058059,
"node_id": "U_kgDOCeXYSw",
"avatar_url": "https://avatars.githubusercontent.com/u/166058059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vixel2006",
"html_url": "https://github.com/Vixel2006",
"followers_url": "https://api.github.com/users/Vixel2006/followers",
"following_url": "https://api.github.com/users/Vixel2006/following{/other_user}",
"gists_url": "https://api.github.com/users/Vixel2006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vixel2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vixel2006/subscriptions",
"organizations_url": "https://api.github.com/users/Vixel2006/orgs",
"repos_url": "https://api.github.com/users/Vixel2006/repos",
"events_url": "https://api.github.com/users/Vixel2006/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vixel2006/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T09:26:28 | 2025-07-03T09:34:55 | 2025-07-03T09:34:55 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39192",
"html_url": "https://github.com/huggingface/transformers/pull/39192",
"diff_url": "https://github.com/huggingface/transformers/pull/39192.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39192.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #38984
This PR fixes a bug in the `QuestionAnsweringPipeline` where it could return fewer than the requested `top_k` answers when processing long contexts or batched inputs. The original implementation processed each context chunk independently, keeping only the `top_k` best spans from each chunk before aggregation. This was flawed: if a chunk's best candidates were later invalidated or turned out to duplicate an answer from another chunk, the pipeline had no fallback options, so the final answer list could come up short. The fix over-fetches a much larger pool of candidates from every chunk, aggregates them into a single global list, and only then sorts, merges, and selects the final `top_k` answers, which guarantees enough valid candidates to produce a complete result.
```py
import transformers
architecture = "csarron/mobilebert-uncased-squad-v2"
tokenizer = transformers.AutoTokenizer.from_pretrained(architecture, low_cpu_mem_usage=True)
model = transformers.MobileBertForQuestionAnswering.from_pretrained(
architecture, low_cpu_mem_usage=True
)
pipeline = transformers.pipeline(task="question-answering", model=model, tokenizer=tokenizer)
data = [
{'question': ['What color is it?', 'How do the people go?', "What does the 'wolf' howl at?"],
'context': [
"Some people said it was green but I know that it's pink.",
'The people on the bus go up and down. Up and down.',
"The pack of 'wolves' stood on the cliff and a 'lone wolf' howled at the moon for hours."
]}
]
# prediction result is wrong
pipeline(data, top_k=2, max_answer_len=5)
```
```
[[{'score': 0.5683297514915466, 'start': 51, 'end': 55, 'answer': 'pink'}, {'score': 0.028800610452890396, 'start': 51, 'end': 56, 'answer': 'pink.'}], [{'score': 0.3008899986743927, 'start': 25, 'end': 36, 'answer': 'up and down'}, {'score': 0.12070021033287048, 'start': 38, 'end': 49, 'answer': 'Up and down'}], [{'score': 0.8356598615646362, 'start': 68, 'end': 76, 'answer': 'the moon'}, {'score': 0.0971309095621109, 'start': 72, 'end': 76, 'answer': 'moon'}]]
```
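The aggregation order described above — over-fetch per chunk, pool globally, then sort/merge/truncate once — can be sketched in a few lines of plain Python (the candidate dicts and the over-fetch factor here are illustrative, not the pipeline's actual internals):

```python
def aggregate_top_k(chunk_candidates, top_k, overfetch=20):
    """Sketch of the fixed flow: over-fetch per chunk, pool globally,
    then sort, merge duplicates, and truncate only once at the end."""
    pool = []
    for cands in chunk_candidates:
        # Over-fetch: keep far more than top_k per chunk so later
        # filtering and deduplication cannot leave us short.
        pool.extend(sorted(cands, key=lambda c: -c["score"])[: top_k * overfetch])
    pool.sort(key=lambda c: -c["score"])
    seen, final = set(), []
    for c in pool:
        key = (c["start"], c["end"])  # merge exact-span duplicates
        if key in seen:
            continue
        seen.add(key)
        final.append(c)
        if len(final) == top_k:
            break
    return final

cands = [
    [{"start": 68, "end": 76, "score": 0.83}, {"start": 72, "end": 76, "score": 0.09}],
    [{"start": 68, "end": 76, "score": 0.79}, {"start": 44, "end": 57, "score": 0.02}],
]
print([c["start"] for c in aggregate_top_k(cands, top_k=3)])  # [68, 72, 44]
```

The duplicate `(68, 76)` span found in both chunks is merged once, and the global pool still holds enough distinct candidates to fill the requested `top_k`.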
## Who can review?
@Rocketknight1
| {
"login": "Vixel2006",
"id": 166058059,
"node_id": "U_kgDOCeXYSw",
"avatar_url": "https://avatars.githubusercontent.com/u/166058059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vixel2006",
"html_url": "https://github.com/Vixel2006",
"followers_url": "https://api.github.com/users/Vixel2006/followers",
"following_url": "https://api.github.com/users/Vixel2006/following{/other_user}",
"gists_url": "https://api.github.com/users/Vixel2006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vixel2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vixel2006/subscriptions",
"organizations_url": "https://api.github.com/users/Vixel2006/orgs",
"repos_url": "https://api.github.com/users/Vixel2006/repos",
"events_url": "https://api.github.com/users/Vixel2006/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vixel2006/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39192/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39191 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39191/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39191/comments | https://api.github.com/repos/huggingface/transformers/issues/39191/events | https://github.com/huggingface/transformers/issues/39191 | 3,198,631,371 | I_kwDOCUB6oc6-pz3L | 39,191 | 🐛 Bug Report: Accelerate config to disable torch dynamo is ignored by transformers automatic compilation | {
"login": "leobianco",
"id": 26525286,
"node_id": "MDQ6VXNlcjI2NTI1Mjg2",
"avatar_url": "https://avatars.githubusercontent.com/u/26525286?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leobianco",
"html_url": "https://github.com/leobianco",
"followers_url": "https://api.github.com/users/leobianco/followers",
"following_url": "https://api.github.com/users/leobianco/following{/other_user}",
"gists_url": "https://api.github.com/users/leobianco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leobianco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leobianco/subscriptions",
"organizations_url": "https://api.github.com/users/leobianco/orgs",
"repos_url": "https://api.github.com/users/leobianco/repos",
"events_url": "https://api.github.com/users/leobianco/events{/privacy}",
"received_events_url": "https://api.github.com/users/leobianco/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T09:03:44 | 2025-08-11T08:03:03 | 2025-08-11T08:03:03 | NONE | null | null | null | null | ### System Info
- transformers version: 4.53.0
- platform: Debian 6.1.128-1 x86_64 GNU/Linux
- python version: 3.11
- torch version: 2.7.1
- trl version: 0.19.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes, DeepSpeed ZeRO 3
### Who can help?
@SunMarc @gante
### Information
The problem arises when using:
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
The tasks I am working on are:
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (see below)
### Reproduction
Although my accelerate config was explicitly set to disable torch dynamo, I found that transformers (after commit `ee37bf0`) still enabled automatic compilation (e.g., torch.compile/torch._dynamo) during the forward pass. This led to unexpected compilation and the following error when using the RLOO trainer from the TRL library to finetune a Gemma 2 model:
```python
[rank5]: Traceback (most recent call last):
[rank5]: File "/home/leo/perl_hallucination/perl.py", line 120, in <module>
[rank5]: main()
[rank5]: File "/home/leo/perl_hallucination/perl.py", line 109, in main
[rank5]: trainer.train()
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/trl/trainer/rloo_trainer.py", line 337, in train
[rank5]: query_responses, logitss = batch_generation(
[rank5]: ^^^^^^^^^^^^^^^^^
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank5]: return func(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/trl/trainer/utils.py", line 1430, in batch_generation
[rank5]: query_response, logits = generate(
[rank5]: ^^^^^^^^^
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/trl/trainer/utils.py", line 1404, in generate
[rank5]: output = lm_backbone.generate(
[rank5]: ^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/peft/peft_model.py", line 1875, in generate
[rank5]: outputs = self.base_model.generate(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank5]: return func(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/transformers/generation/utils.py", line 2623, in generate
[rank5]: result = self._sample(
[rank5]: ^^^^^^^^^^^^^
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/transformers/generation/utils.py", line 3607, in _sample
[rank5]: outputs = model_forward(**model_inputs, return_dict=True)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
[rank5]: return fn(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
[rank5]: return self._call_impl(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
[rank5]: return forward_call(*args, **kwargs)
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 1432, in __call__
[rank5]: return self._torchdynamo_orig_callable(
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 598, in __call__
[rank5]: return _compile(
[rank5]: ^^^^^^^^^
[rank5]: File "/home/leo/perl_hallucination/.venv/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 981, in _compile
[rank5]: raise FailOnRecompileLimitHit(
[rank5]: torch._dynamo.exc.FailOnRecompileLimitHit: recompile_limit reached with one_graph=True. Excessive recompilations can degrade performance due to the compilation overhead of each recompilation. To monitor recompilations, enable TORCH_LOGS=recompiles. If recompilations are expected, consider increasing torch._dynamo.config.cache_size_limit to an appropriate value.
```
[The docstring](https://github.com/huggingface/transformers/blame/8178c43112295bf8c4ef04c667efbbbfd34b8bca/src/transformers/generation/configuration_utils.py#L386-L386C2) asks us to open an issue if one wishes to disable this.
This was a bit confusing to debug, as I expected my configuration to be respected. In my view, when the accelerate config disables torch dynamo, any form of automatic compilation in downstream libraries should also be disabled by default.
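Until the two configs are reconciled, one stopgap worth verifying on your setup is to hard-disable TorchDynamo via its documented `TORCHDYNAMO_DISABLE` environment variable, which torch reads at import time (recent transformers releases may also expose a compile-disabling flag on the generation config — check your version's docs; treat both as assumptions to test locally):

```python
import os

# Stopgap: disable TorchDynamo globally via its environment variable.
# It must be set before `import torch`, because dynamo reads it at
# import time.
os.environ["TORCHDYNAMO_DISABLE"] = "1"

# The rest of the training script would follow unchanged, e.g.:
# import torch
# from trl import RLOOTrainer
# ...
# trainer.train()

print(os.environ["TORCHDYNAMO_DISABLE"])  # 1
```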
| {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39191/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39191/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39190 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39190/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39190/comments | https://api.github.com/repos/huggingface/transformers/issues/39190/events | https://github.com/huggingface/transformers/pull/39190 | 3,198,561,805 | PR_kwDOCUB6oc6dOboc | 39,190 | adjust input and output texts for test_modeling_recurrent_gemma.py | {
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T08:40:16 | 2025-07-07T13:13:26 | 2025-07-07T13:13:25 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39190",
"html_url": "https://github.com/huggingface/transformers/pull/39190",
"diff_url": "https://github.com/huggingface/transformers/pull/39190.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39190.patch",
"merged_at": "2025-07-07T13:13:25"
} | In the original test case, there is no semantic association between the input prompt and the expected text. This PR changes the input and expected output per platform. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39190/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39189 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39189/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39189/comments | https://api.github.com/repos/huggingface/transformers/issues/39189/events | https://github.com/huggingface/transformers/pull/39189 | 3,198,463,563 | PR_kwDOCUB6oc6dOGir | 39,189 | Update expected values (after switching to A10) - part 4 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T08:05:03 | 2025-08-08T15:09:31 | 2025-07-03T13:13:07 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39189",
"html_url": "https://github.com/huggingface/transformers/pull/39189",
"diff_url": "https://github.com/huggingface/transformers/pull/39189.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39189.patch",
"merged_at": "2025-07-03T13:13:07"
} | # What does this PR do?
As discussed offline, merging to move fast, since this only updates expected outputs for A10. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39189/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39188 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39188/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39188/comments | https://api.github.com/repos/huggingface/transformers/issues/39188/events | https://github.com/huggingface/transformers/issues/39188 | 3,198,327,809 | I_kwDOCUB6oc6-opwB | 39,188 | Gemma2 falls back to CPU execution when attn_implementation='flash_attention_2' | {
"login": "Lingy12",
"id": 54443474,
"node_id": "MDQ6VXNlcjU0NDQzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/54443474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lingy12",
"html_url": "https://github.com/Lingy12",
"followers_url": "https://api.github.com/users/Lingy12/followers",
"following_url": "https://api.github.com/users/Lingy12/following{/other_user}",
"gists_url": "https://api.github.com/users/Lingy12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lingy12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lingy12/subscriptions",
"organizations_url": "https://api.github.com/users/Lingy12/orgs",
"repos_url": "https://api.github.com/users/Lingy12/repos",
"events_url": "https://api.github.com/users/Lingy12/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lingy12/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-03T07:18:59 | 2025-09-17T08:03:24 | 2025-09-17T08:03:24 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.53.0
- Platform: Linux-5.15.0-1076-nvidia-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.4.3
- Accelerate version: 0.30.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.6.0+cu124 (CUDA)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b-it",
device_map="auto",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
It seems to fail to compile the CUDA graph, emitting the warning below. `generate` produces output normally, but runs only on the CPU.
```
[/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py:679](https://vscode-remote+tunnel-002ba2ap-002ddgx037.vscode-resource.vscode-cdn.net/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py:679): UserWarning: Graph break due to unsupported builtin flash_attn_2_cuda.PyCapsule.fwd. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
torch._dynamo.utils.warn_once(msg)
W0703 15:17:02.947000 654062 torch/_dynamo/convert_frame.py:906] [7/8] torch._dynamo hit config.cache_size_limit (8)
W0703 15:17:02.947000 654062 torch/_dynamo/convert_frame.py:906] [7/8] function: 'forward' (/usr/local/lib/python3.10/dist-packages/transformers/models/gemma2/modeling_gemma2.py:195)
W0703 15:17:02.947000 654062 torch/_dynamo/convert_frame.py:906] [7/8] last reason: 7/0: L['self'].layer_idx == 0
W0703 15:17:02.947000 654062 torch/_dynamo/convert_frame.py:906] [7/8] To log all recompilation reasons, use TORCH_LOGS="recompiles".
W0703 15:17:02.947000 654062 torch/_dynamo/convert_frame.py:906] [7/8] To diagnose recompilation issues, see https://pytorch.org/docs/main/torch.compiler_troubleshooting.html.
[/usr/local/lib/python3.10/dist-packages/torch/cuda/graphs.py:84](https://vscode-remote+tunnel-002ba2ap-002ddgx037.vscode-resource.vscode-cdn.net/usr/local/lib/python3.10/dist-packages/torch/cuda/graphs.py:84): UserWarning: The CUDA Graph is empty. This usually means that the graph was attempted to be captured on wrong device or stream. (Triggered internally at [/pytorch/aten/src/ATen/cuda/CUDAGraph.cpp:206](https://vscode-remote+tunnel-002ba2ap-002ddgx037.vscode-resource.vscode-cdn.net/pytorch/aten/src/ATen/cuda/CUDAGraph.cpp:206).)
super().capture_end()
```
### Expected behavior
The program should run inference on GPU. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39188/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39187 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39187/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39187/comments | https://api.github.com/repos/huggingface/transformers/issues/39187/events | https://github.com/huggingface/transformers/pull/39187 | 3,198,200,473 | PR_kwDOCUB6oc6dNOQf | 39,187 | fix xpu failures on PT 2.7 and 2.8 w/o IPEX and enable hqq cases on XPU | {
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T06:28:30 | 2025-07-10T02:49:43 | 2025-07-08T08:18:26 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39187",
"html_url": "https://github.com/huggingface/transformers/pull/39187",
"diff_url": "https://github.com/huggingface/transformers/pull/39187.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39187.patch",
"merged_at": "2025-07-08T08:18:26"
} | null | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39187/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39186 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39186/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39186/comments | https://api.github.com/repos/huggingface/transformers/issues/39186/events | https://github.com/huggingface/transformers/issues/39186 | 3,198,027,767 | I_kwDOCUB6oc6-ngf3 | 39,186 | FSDP RuntimeError: 'weight' must be 2-D | {
"login": "mukhayy",
"id": 28218767,
"node_id": "MDQ6VXNlcjI4MjE4NzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/28218767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mukhayy",
"html_url": "https://github.com/mukhayy",
"followers_url": "https://api.github.com/users/mukhayy/followers",
"following_url": "https://api.github.com/users/mukhayy/following{/other_user}",
"gists_url": "https://api.github.com/users/mukhayy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mukhayy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mukhayy/subscriptions",
"organizations_url": "https://api.github.com/users/mukhayy/orgs",
"repos_url": "https://api.github.com/users/mukhayy/repos",
"events_url": "https://api.github.com/users/mukhayy/events{/privacy}",
"received_events_url": "https://api.github.com/users/mukhayy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-03T05:02:23 | 2025-09-26T21:54:18 | 2025-07-05T06:34:23 | NONE | null | null | null | null | I am getting a `'weight' must be 2-D` error when full fine-tuning gemma-3-12b on 4xGPU. A couple of days ago the same configuration did not produce this error, and I have verified that I haven't changed anything since.
FSDP config inside `TrainingArguments()`:
```
fsdp="full_shard",
fsdp_config={
"auto_wrap_policy": "transformer_based_wrap",
"backward_prefetch": "backward_pre",
"cpu_ram_efficient_loading": True,
"use_orig_params": True,
"sync_module_states": True,
"transformer_layer_cls_to_wrap": ["Gemma3DecoderLayer"],
},
```
Stack Trace
```
2025-07-03 04:54:34,137 - ERROR - Traceback (most recent call last):
File "/mnt/dataset/working_fsdp.py", line 304, in main
trainer.train()
File "/opt/conda/lib/python3.12/site-packages/transformers/trainer.py", line 2207, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/transformers/trainer.py", line 2549, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/transformers/trainer.py", line 3750, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/transformers/trainer.py", line 3837, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/accelerate/utils/operations.py", line 818, in forward
return model_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/accelerate/utils/operations.py", line 806, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/transformers/models/gemma3/modeling_gemma3.py", line 1083, in forward
outputs = self.model(
^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/transformers/utils/generic.py", line 943, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/transformers/models/gemma3/modeling_gemma3.py", line 885, in forward
inputs_embeds = self.get_input_embeddings()(llm_input_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/transformers/models/gemma3/modeling_gemma3.py", line 113, in forward
return super().forward(input_ids) * self.embed_scale.to(self.weight.dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/torch/nn/modules/sparse.py", line 192, in forward
return F.embedding(
^^^^^^^^^^^^
File "/opt/conda/lib/python3.12/site-packages/torch/nn/functional.py", line 2546, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: 'weight' must be 2-D
```
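For what it's worth, the failure mode itself is just a shape check in the embedding lookup. A minimal sketch (illustrative only, not the actual FSDP internals, and deliberately torch-free): FSDP flattens sharded parameters into 1-D buffers, so if an embedding's forward runs without the parameters being gathered back to their 2-D shape, the lookup sees 1-D data and raises exactly this error:

```python
def embedding_lookup(weight, input_ids):
    """Mimic torch.embedding's shape check: weight must be a 2-D table."""
    is_2d = bool(weight) and isinstance(weight[0], list)
    if not is_2d:
        raise RuntimeError("'weight' must be 2-D")
    return [weight[i] for i in input_ids]

# 2-D weight (vocab_size x hidden): lookup works
table = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
print(embedding_lookup(table, [2, 0]))  # [[0.5, 0.6], [0.1, 0.2]]

# 1-D weight, like an un-gathered FSDP flat parameter: same error as above
flat = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
try:
    embedding_lookup(flat, [2, 0])
except RuntimeError as e:
    print(e)  # 'weight' must be 2-D
```

This suggests the embedding module is being called outside the FSDP unit that owns (and unshards) its parameters.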
Does anyone have pointers on where the issue might be?
Thanks | {
"login": "mukhayy",
"id": 28218767,
"node_id": "MDQ6VXNlcjI4MjE4NzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/28218767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mukhayy",
"html_url": "https://github.com/mukhayy",
"followers_url": "https://api.github.com/users/mukhayy/followers",
"following_url": "https://api.github.com/users/mukhayy/following{/other_user}",
"gists_url": "https://api.github.com/users/mukhayy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mukhayy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mukhayy/subscriptions",
"organizations_url": "https://api.github.com/users/mukhayy/orgs",
"repos_url": "https://api.github.com/users/mukhayy/repos",
"events_url": "https://api.github.com/users/mukhayy/events{/privacy}",
"received_events_url": "https://api.github.com/users/mukhayy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39186/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39185 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39185/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39185/comments | https://api.github.com/repos/huggingface/transformers/issues/39185/events | https://github.com/huggingface/transformers/pull/39185 | 3,196,968,523 | PR_kwDOCUB6oc6dJCXP | 39,185 | [modular] Simplify logic and docstring handling | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-02T19:28:56 | 2025-07-07T12:52:58 | 2025-07-07T12:52:57 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39185",
"html_url": "https://github.com/huggingface/transformers/pull/39185",
"diff_url": "https://github.com/huggingface/transformers/pull/39185.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39185.patch",
"merged_at": "2025-07-07T12:52:57"
} | # What does this PR do?
Greatly simplify the `replace_class_node` logic (I have wanted to do this for a long time, as it was bloated and very hard to read) and simplify the docstring handling at the same time.
Basically, the idea is now to use the parent docstring if the modular class defines none, or to use the modular docstring if one is present. Previously, we were merging the two docstrings, which was buggy (see a few places in the diff where arg explanations appear after examples, or similar issues), inconsistent (e.g. adding a simple arg for a Config, while convenient, was always inconsistent because the checkpoints link was never updated correctly), and surprising (it was a kind of magic that contributors rarely understood, which makes sense given the issues explained above). Moreover, with `auto_docstring`, we should no longer have to redefine the parameter explanations every time, so it makes sense to relax the docstring handling.
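The new rule can be sketched in a few lines (names here are illustrative, not the modular converter's actual API): the modular class's own docstring wins if it exists, otherwise the parent's docstring is used verbatim, with no merging in either case.

```python
def resolve_docstring(modular_cls, parent_cls):
    """Sketch of the simplified rule: no merging of the two docstrings."""
    # Look only at the class's own dict so an inherited docstring
    # does not count as "defined in modular".
    own = modular_cls.__dict__.get("__doc__")
    return own if own else parent_cls.__doc__

class ParentConfig:
    """Parent docstring with the full arg explanations."""

class ChildNoDoc(ParentConfig):
    pass  # no docstring in modular -> parent's is used as-is

class ChildWithDoc(ParentConfig):
    """Custom docstring written in the modular file."""

print(resolve_docstring(ChildNoDoc, ParentConfig))   # parent's docstring
print(resolve_docstring(ChildWithDoc, ParentConfig)) # child's own docstring
```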
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39185/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39184 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39184/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39184/comments | https://api.github.com/repos/huggingface/transformers/issues/39184/events | https://github.com/huggingface/transformers/pull/39184 | 3,196,531,704 | PR_kwDOCUB6oc6dHiyf | 39,184 | Better return typehints for `from_pretrained` | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8882772041,
"node_id": "LA_kwDOCUB6oc8AAAACEXRYSQ",
"url": "https://api.github.com/repos/huggingface/transformers/labels/typing",
"name": "typing",
"color": "DBA272",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-02T16:44:35 | 2025-07-03T14:22:48 | 2025-07-03T14:22:48 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39184",
"html_url": "https://github.com/huggingface/transformers/pull/39184",
"diff_url": "https://github.com/huggingface/transformers/pull/39184.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39184.patch",
"merged_at": "2025-07-03T14:22:48"
} | # What does this PR do?
Better typing when using `from_pretrained` for a specific config/processor/feature_extractor. This makes class docs, attributes, and function signatures more readily available to editors and type checkers.
### On `main` (before)

### On branch (after)

| {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39184/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39183 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39183/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39183/comments | https://api.github.com/repos/huggingface/transformers/issues/39183/events | https://github.com/huggingface/transformers/pull/39183 | 3,196,507,837 | PR_kwDOCUB6oc6dHdu0 | 39,183 | Add a 'chat' extra | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-02T16:37:36 | 2025-07-02T16:43:37 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39183",
"html_url": "https://github.com/huggingface/transformers/pull/39183",
"diff_url": "https://github.com/huggingface/transformers/pull/39183.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39183.patch",
"merged_at": null
} | Adds a new chat extra alongside the serving extra | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39183/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39182 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39182/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39182/comments | https://api.github.com/repos/huggingface/transformers/issues/39182/events | https://github.com/huggingface/transformers/pull/39182 | 3,196,498,389 | PR_kwDOCUB6oc6dHbv8 | 39,182 | Mllama fixes | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-02T16:33:37 | 2025-07-07T10:03:40 | null | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39182",
"html_url": "https://github.com/huggingface/transformers/pull/39182",
"diff_url": "https://github.com/huggingface/transformers/pull/39182.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39182.patch",
"merged_at": null
This PR adds the `is_causal` attribute to some Attention modules in mllama and disables FA2 for the `MllamaVisionModel`.
Some tests used to fail when `is_causal` was missing (`MllamaForCausalLMModelTest::test_flash_attn_2_fp32_ln`, `MllamaForConditionalGenerationModelTest::test_eager_matches_fa2_generate`, ...), and once it was added, FA2 still failed for these tests on both MI355 and A100. This is because the vision model uses a 4D attention mask, which is not supported by FA2.
I am not an expert in VLMs, so could you please check that `is_causal` is right, @zucchini-nlp?
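To illustrate the first failure mode (a schematic sketch, not the real transformers integration code): the shared FA2 code path reads `is_causal` off the attention module, so any module that never defined the attribute crashed there.

```python
class AttentionWithoutFlag:
    """An attention module that never defined the attribute."""

class AttentionWithFlag:
    is_causal = True  # the attribute this PR adds to mllama modules

def fa2_kwargs(module):
    # Generic FA2 integration code reads the flag off the module;
    # modules missing it raise AttributeError here.
    return {"causal": module.is_causal}

print(fa2_kwargs(AttentionWithFlag()))  # {'causal': True}
try:
    fa2_kwargs(AttentionWithoutFlag())
except AttributeError as e:
    print("missing flag:", e)
```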
I also added Expectations for AMD MI355. After these changes, we go from `15 failed, 185 passed, 94 skipped` to `196 passed, 98 skipped` on AMD MI355. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39182/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39182/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39181 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39181/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39181/comments | https://api.github.com/repos/huggingface/transformers/issues/39181/events | https://github.com/huggingface/transformers/pull/39181 | 3,196,454,180 | PR_kwDOCUB6oc6dHSSV | 39,181 | [`Dia`] Change ckpt path in docs | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-02T16:16:58 | 2025-07-03T10:02:59 | 2025-07-03T10:02:58 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39181",
"html_url": "https://github.com/huggingface/transformers/pull/39181",
"diff_url": "https://github.com/huggingface/transformers/pull/39181.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39181.patch",
"merged_at": "2025-07-03T10:02:58"
} | As per title :D | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39181/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39180 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39180/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39180/comments | https://api.github.com/repos/huggingface/transformers/issues/39180/events | https://github.com/huggingface/transformers/pull/39180 | 3,196,374,563 | PR_kwDOCUB6oc6dHBFq | 39,180 | [modular] Follow global indexing and attribute setting, and their dependencies | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-02T15:49:59 | 2025-07-07T12:36:45 | 2025-07-07T12:36:43 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39180",
"html_url": "https://github.com/huggingface/transformers/pull/39180",
"diff_url": "https://github.com/huggingface/transformers/pull/39180.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39180.patch",
"merged_at": "2025-07-07T12:36:43"
} | # What does this PR do?
As per the title. See the new example to better understand the issue at hand. Before, any global indexing or attribute setting on a global variable would not be followed, because they are not "simple and standard" variable assignments. This PR solves it.
This is needed in https://github.com/huggingface/transformers/pull/35891 as well
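The reason these cases slip past a visitor that only tracks plain assignments can be shown with Python's `ast` module (an illustrative snippet, not the converter's actual code): indexing and attribute assignments produce different target node types than a simple `NAME = value`.

```python
import ast

code = """
PLAIN = compute()          # simple assignment -> ast.Name target
REGISTRY["new_model"] = X  # global indexing   -> ast.Subscript target
CONFIG.attr = Y            # attribute setting -> ast.Attribute target
"""

# A visitor matching only ast.Name targets misses the last two forms.
targets = [type(node.targets[0]).__name__
           for node in ast.parse(code).body
           if isinstance(node, ast.Assign)]
print(targets)  # ['Name', 'Subscript', 'Attribute']
```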
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39180/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39179 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39179/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39179/comments | https://api.github.com/repos/huggingface/transformers/issues/39179/events | https://github.com/huggingface/transformers/pull/39179 | 3,196,364,528 | PR_kwDOCUB6oc6dG-3N | 39,179 | Update expected values (after switching to A10) - part 3 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-02T15:47:02 | 2025-07-03T13:01:59 | 2025-07-02T20:48:30 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39179",
"html_url": "https://github.com/huggingface/transformers/pull/39179",
"diff_url": "https://github.com/huggingface/transformers/pull/39179.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39179.patch",
"merged_at": "2025-07-02T20:48:30"
} | # What does this PR do?
As discussed offline, merging quickly to keep moving, since this only updates expected test outputs for A10.
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39179/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39178 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39178/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39178/comments | https://api.github.com/repos/huggingface/transformers/issues/39178/events | https://github.com/huggingface/transformers/pull/39178 | 3,196,356,869 | PR_kwDOCUB6oc6dG9Pv | 39,178 | [serve] Model name or path should be required | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-02T15:44:22 | 2025-07-02T20:06:49 | 2025-07-02T20:06:47 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39178",
"html_url": "https://github.com/huggingface/transformers/pull/39178",
"diff_url": "https://github.com/huggingface/transformers/pull/39178.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39178.patch",
"merged_at": "2025-07-02T20:06:47"
} | null | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39178/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39177 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39177/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39177/comments | https://api.github.com/repos/huggingface/transformers/issues/39177/events | https://github.com/huggingface/transformers/pull/39177 | 3,196,273,506 | PR_kwDOCUB6oc6dGrWj | 39,177 | fix bug using FSDP V1 will lead to model device not properly set | {
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-02T15:15:08 | 2025-07-11T06:08:54 | 2025-07-07T12:47:04 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39177",
"html_url": "https://github.com/huggingface/transformers/pull/39177",
"diff_url": "https://github.com/huggingface/transformers/pull/39177.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39177.patch",
"merged_at": "2025-07-07T12:47:04"
} | In PR [36132](https://github.com/huggingface/transformers/pull/36132/files#diff-ed55888e6665791fe92cc8fc0c499da54f4ace6738551cd9a2591881cda076deR2366), when FSDP is used, the accelerator is no longer used to prepare the model, which leaves the model weights on the wrong GPU device. To reproduce the bug with the peft library's [sft example](https://github.com/huggingface/peft/tree/main/examples/sft), run a command like
`accelerate launch --config_file "fsdp_config.yaml" train.py --seed 100 --model_name_or_path "meta-llama/Llama-2-7b-chat-hf" --dataset_name "smangrul/ultrachat-10k-chatml" --chat_template_format "chatml" --add_special_tokens False --append_concat_token False --splits "train,test" --max_seq_len 2048 --num_train_epochs 1 --logging_steps 5 --log_level "info" --logging_strategy "steps" --eval_strategy "epoch" --save_strategy "epoch" --bf16 True --packing True --learning_rate 1e-4 --lr_scheduler_type "cosine" --weight_decay 1e-4 --warmup_ratio 0.0 --max_grad_norm 1.0 --output_dir "llama-sft-lora-fsdp" --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --gradient_accumulation_steps 4 --gradient_checkpointing True --use_reentrant False --dataset_text_field "content" --use_flash_attn False --use_peft_lora True --lora_r 8 --lora_alpha 16 --lora_dropout 0.1 --lora_target_modules "q_proj,k_proj,v_proj,o_proj,up_proj,gate_proj" --use_4bit_quantization False`; it crashes with the following error:
```
[rank1]: File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1784, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/usr/local/lib/python3.11/dist-packages/transformers/utils/generic.py", line 943, in wrapper
[rank1]: output = func(self, *args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/usr/local/lib/python3.11/dist-packages/transformers/models/llama/modeling_llama.py", line 408, in forward
[rank1]: inputs_embeds = self.embed_tokens(input_ids)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1784, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/sparse.py", line 192, in forward
[rank1]: return F.embedding(
[rank1]: ^^^^^^^^^^^^
[rank1]: File "/usr/local/lib/python3.11/dist-packages/torch/nn/functional.py", line 2546, in embedding
[rank1]: return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: RuntimeError: The tensor has a non-zero number of elements, but its data is not allocated yet.
```
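For illustration only, the regression boils down to a device-preparation guard of roughly this shape (the function name and arguments below are hypothetical, not the actual transformers code): when FSDP was enabled, preparation was skipped entirely, so weights were never materialized on the right device.

```python
def needs_accelerator_prepare(is_fsdp_enabled: bool, weights_materialized: bool) -> bool:
    """Hypothetical sketch of the decision this PR fixes.

    The buggy guard was effectively `return not is_fsdp_enabled`, which
    skipped preparation under FSDP and left the weights unallocated.
    The fix still prepares the model when its weights are not on device yet.
    """
    return (not is_fsdp_enabled) or (not weights_materialized)
```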
Running the same command with transformers 4.52.4 does not hit this issue. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39177/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39177/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39176 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39176/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39176/comments | https://api.github.com/repos/huggingface/transformers/issues/39176/events | https://github.com/huggingface/transformers/pull/39176 | 3,196,265,089 | PR_kwDOCUB6oc6dGphQ | 39,176 | Random serve fixes | {
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-02T15:12:32 | 2025-07-02T20:09:59 | 2025-07-02T20:09:58 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39176",
"html_url": "https://github.com/huggingface/transformers/pull/39176",
"diff_url": "https://github.com/huggingface/transformers/pull/39176.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39176.patch",
"merged_at": "2025-07-02T20:09:58"
} | * Index out of bounds repro
- Start server
- Open chat session using model `meta-llama/Llama-3.2-3b-Instruct`, enter `hello` in the chat
- Open another session to the same model, enter the same message `hello`
* The chat CLI specifies the revision, while other clients (Jan) don't. This could trigger an unnecessary model reload (and double the memory usage). We now compare the full model names. | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39176/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39175 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39175/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39175/comments | https://api.github.com/repos/huggingface/transformers/issues/39175/events | https://github.com/huggingface/transformers/issues/39175 | 3,196,165,212 | I_kwDOCUB6oc6-gZxc | 39,175 | Torch patches tracker for HPU/Gaudi | {
"login": "IlyasMoutawwakil",
"id": 57442720,
"node_id": "MDQ6VXNlcjU3NDQyNzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/57442720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IlyasMoutawwakil",
"html_url": "https://github.com/IlyasMoutawwakil",
"followers_url": "https://api.github.com/users/IlyasMoutawwakil/followers",
"following_url": "https://api.github.com/users/IlyasMoutawwakil/following{/other_user}",
"gists_url": "https://api.github.com/users/IlyasMoutawwakil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IlyasMoutawwakil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IlyasMoutawwakil/subscriptions",
"organizations_url": "https://api.github.com/users/IlyasMoutawwakil/orgs",
"repos_url": "https://api.github.com/users/IlyasMoutawwakil/repos",
"events_url": "https://api.github.com/users/IlyasMoutawwakil/events{/privacy}",
"received_events_url": "https://api.github.com/users/IlyasMoutawwakil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | open | false | {
"login": "IlyasMoutawwakil",
"id": 57442720,
"node_id": "MDQ6VXNlcjU3NDQyNzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/57442720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IlyasMoutawwakil",
"html_url": "https://github.com/IlyasMoutawwakil",
"followers_url": "https://api.github.com/users/IlyasMoutawwakil/followers",
"following_url": "https://api.github.com/users/IlyasMoutawwakil/following{/other_user}",
"gists_url": "https://api.github.com/users/IlyasMoutawwakil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IlyasMoutawwakil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IlyasMoutawwakil/subscriptions",
"organizations_url": "https://api.github.com/users/IlyasMoutawwakil/orgs",
"repos_url": "https://api.github.com/users/IlyasMoutawwakil/repos",
"events_url": "https://api.github.com/users/IlyasMoutawwakil/events{/privacy}",
"received_events_url": "https://api.github.com/users/IlyasMoutawwakil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "IlyasMoutawwakil",
"id": 57442720,
"node_id": "MDQ6VXNlcjU3NDQyNzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/57442720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IlyasMoutawwakil",
"html_url": "https://github.com/IlyasMoutawwakil",
"followers_url": "https://api.github.com/users/IlyasMoutawwakil/followers",
"following_url": "https://api.github.com/users/IlyasMoutawwakil/following{/other_user}",
"gists_url": "https://api.github.com/users/IlyasMoutawwakil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IlyasMoutawwakil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IlyasMoutawwakil/subscriptions",
"organizations_url": "https://api.github.com/users/IlyasMoutawwakil/orgs",
"repos_url": "https://api.github.com/users/IlyasMoutawwakil/repos",
"events_url": "https://api.github.com/users/IlyasMoutawwakil/events{/privacy}",
"received_events_url": "https://api.github.com/users/IlyasMoutawwakil/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | [] | 2025-07-02T14:41:24 | 2025-10-19T13:12:11 | null | MEMBER | null | null | null | null | ### System Info
On the Gaudi3 machine that we use for CI, we observed some low-level limitations (torch/SynapseAI):
- `torch.Tensor.masked_fill_` operation doesn't work with int64 dtype (only on Gaudi1).
- `torch.gather` (and `torch.Tensor.gather`) operation doesn't work with int64 dtype.
- `torch.scatter` (and `torch.Tensor.scatter`) operation doesn't work when `input` tensor shares memory with `src` tensor.
- `torch.linalg.cholesky` might return NaNs when the input matrix is "barely" positive definite, resulting in NaN-filled samples from the multivariate normal distribution.
Patches were introduced to bypass these issues for now but we expect them to be removed when the issues are fixed in SynapseAI/torch+hpu.
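As an illustration, such a patch can be a thin wrapper that falls back to a supported dtype. The sketch below is hypothetical (not the actual patch shipped in transformers): it works around the int64 `torch.gather` limitation by gathering on an int32 view of the values, which is only safe while the values fit in int32.

```python
import torch

_orig_gather = torch.gather

def gather_int64_safe(input: torch.Tensor, dim: int, index: torch.Tensor) -> torch.Tensor:
    # Hypothetical workaround sketch: the Gaudi kernel rejects int64 gather,
    # so gather on an int32 copy of the values and restore the dtype after.
    # Only valid while the values fit in int32.
    if input.dtype == torch.int64:
        return _orig_gather(input.to(torch.int32), dim, index).to(torch.int64)
    return _orig_gather(input, dim, index)
```

The same wrap-and-fall-back pattern would apply to `masked_fill_` and `scatter` (the latter additionally cloning `src` when it shares memory with `input`).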
@ydshieh
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
.
### Expected behavior
. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39175/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39174 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39174/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39174/comments | https://api.github.com/repos/huggingface/transformers/issues/39174/events | https://github.com/huggingface/transformers/pull/39174 | 3,196,040,397 | PR_kwDOCUB6oc6dF40A | 39,174 | [glm4v] fix video inference | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-02T14:03:33 | 2025-07-03T05:20:42 | 2025-07-03T05:20:42 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39174",
"html_url": "https://github.com/huggingface/transformers/pull/39174",
"diff_url": "https://github.com/huggingface/transformers/pull/39174.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39174.patch",
"merged_at": "2025-07-03T05:20:41"
} | # What does this PR do?
I broke video inference in a recent PR by copying from Qwen2-VL. It turns out GLM4V doesn't have a dedicated video token and instead repeats the image token id many times per frame.
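For illustration, the placeholder expansion looks roughly like the sketch below (the helper and its arguments are hypothetical, not the actual transformers implementation): each video placeholder in the prompt is replaced by the image token id repeated once per token of every frame.

```python
def expand_video_placeholder(input_ids, placeholder_id, image_token_id, num_frames, tokens_per_frame):
    # Hypothetical sketch: GLM4V has no dedicated video token, so a video
    # placeholder expands into the *image* token id repeated for every
    # frame's tokens instead.
    expanded = []
    for tok in input_ids:
        if tok == placeholder_id:
            expanded.extend([image_token_id] * (num_frames * tokens_per_frame))
        else:
            expanded.append(tok)
    return expanded
```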
No need for patching, only the main branch is broken | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39174/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39173 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39173/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39173/comments | https://api.github.com/repos/huggingface/transformers/issues/39173/events | https://github.com/huggingface/transformers/pull/39173 | 3,195,861,829 | PR_kwDOCUB6oc6dFSIp | 39,173 | Reduce Glm4v model test size significantly | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-02T13:06:56 | 2025-07-02T15:49:12 | 2025-07-02T13:55:06 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39173",
"html_url": "https://github.com/huggingface/transformers/pull/39173",
"diff_url": "https://github.com/huggingface/transformers/pull/39173.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39173.patch",
"merged_at": "2025-07-02T13:55:06"
} | # What does this PR do?
As per the title. The test model had about 180M parameters, making the CI quite slow. cc @ydshieh | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39173/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39172 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39172/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39172/comments | https://api.github.com/repos/huggingface/transformers/issues/39172/events | https://github.com/huggingface/transformers/pull/39172 | 3,195,520,568 | PR_kwDOCUB6oc6dEI4a | 39,172 | fix Glm4v batch videos forward | {
"login": "Kuangdd01",
"id": 82590017,
"node_id": "MDQ6VXNlcjgyNTkwMDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/82590017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kuangdd01",
"html_url": "https://github.com/Kuangdd01",
"followers_url": "https://api.github.com/users/Kuangdd01/followers",
"following_url": "https://api.github.com/users/Kuangdd01/following{/other_user}",
"gists_url": "https://api.github.com/users/Kuangdd01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kuangdd01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kuangdd01/subscriptions",
"organizations_url": "https://api.github.com/users/Kuangdd01/orgs",
"repos_url": "https://api.github.com/users/Kuangdd01/repos",
"events_url": "https://api.github.com/users/Kuangdd01/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kuangdd01/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-02T11:07:12 | 2025-07-10T08:46:21 | 2025-07-10T08:44:29 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39172",
"html_url": "https://github.com/huggingface/transformers/pull/39172",
"diff_url": "https://github.com/huggingface/transformers/pull/39172.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39172.patch",
"merged_at": "2025-07-10T08:44:29"
} | # What does this PR do?
Fixes issues in video processing and `get_video_features` for GLM4V.
Tested with the following script:
```python
import torch
from transformers import AutoProcessor, Glm4vForConditionalGeneration
from PIL import Image
import numpy as np
import cv2
import os
from dataclasses import dataclass
from transformers.video_utils import VideoMetadata
def prepare_video_metadata(videos):
video_metadata = []
for video in videos:
if isinstance(video, list):
num_frames = len(video)
elif hasattr(video, "shape"):
if len(video.shape) == 4: # (T, H, W, C)
num_frames = video.shape[0]
else:
num_frames = 1
else:
num_frames = 8
metadata = {
"fps": 2,
"duration": num_frames / 2,
"total_frames": num_frames,
}
video_metadata.append(metadata)
return video_metadata
def test_video_processing(video_path_list, num_frames=4):
selected_frames = []
for video_path in video_path_list:
cap = cv2.VideoCapture(video_path)
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print(f"Total frames: {frame_count}")
video_metadata = []
for video_path in video_path_list:
temp_frames = []
cap = cv2.VideoCapture(video_path)
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
step = max(frame_count // num_frames, 1)
for i in range(0, frame_count, step):
cap.set(cv2.CAP_PROP_POS_FRAMES, i)
ret, frame = cap.read()
if not ret:
continue
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
pil_img = Image.fromarray(frame_rgb)
temp_frames.append(pil_img)
selected_frames.append(temp_frames)
video_metadata = prepare_video_metadata(selected_frames)
video_inputs = processor.video_processor(videos=selected_frames, video_metadata=video_metadata)
questions = ["What kind of dog is this?", "Describe the background."]
messages_batch = [
[
{
"role": "user",
"content": [
{"type": "video"},
{"type": "text", "text": question},
],
}
]
for question in questions
]
texts = [
processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
for msg in messages_batch
]
inputs_batch = processor(text=texts, videos=selected_frames, video_metadata=video_metadata, return_tensors="pt", padding=True)
print(processor.batch_decode(inputs_batch['input_ids'])[0])
rope_pos, deltas = model.model.get_rope_index(
inputs_batch["input_ids"],
None,
inputs_batch["video_grid_thw"],
inputs_batch["attention_mask"]
)
print(rope_pos.shape, "\n", deltas)
processor_name = "THUDM/GLM-4.1V-9B-Thinking"
processor = AutoProcessor.from_pretrained(processor_name)
model = Glm4vForConditionalGeneration.from_pretrained(processor_name)
if __name__ == "__main__":
# image_path = "./data/mllm_demo_data/1.jpg"
video_path_1 = "./data/mllm_demo_data/1.mp4"
video_path_2 = "./data/mllm_demo_data/2.avi"
test_video_processing([video_path_1, video_path_2])
```
For forward logits checking, @zRzRzRzRzRzRzR
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@zucchini-nlp cc @zRzRzRzRzRzRzR
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39172/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39171 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39171/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39171/comments | https://api.github.com/repos/huggingface/transformers/issues/39171/events | https://github.com/huggingface/transformers/pull/39171 | 3,195,198,578 | PR_kwDOCUB6oc6dDFL_ | 39,171 | Make _compute_dynamic_ntk_parameters exportable | {
"login": "xadupre",
"id": 22452781,
"node_id": "MDQ6VXNlcjIyNDUyNzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/22452781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xadupre",
"html_url": "https://github.com/xadupre",
"followers_url": "https://api.github.com/users/xadupre/followers",
"following_url": "https://api.github.com/users/xadupre/following{/other_user}",
"gists_url": "https://api.github.com/users/xadupre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xadupre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xadupre/subscriptions",
"organizations_url": "https://api.github.com/users/xadupre/orgs",
"repos_url": "https://api.github.com/users/xadupre/repos",
"events_url": "https://api.github.com/users/xadupre/events{/privacy}",
"received_events_url": "https://api.github.com/users/xadupre/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 7305045262,
"node_id": "LA_kwDOCUB6oc8AAAABs2olDg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/ExecuTorch",
"name": "ExecuTorch",
"color": "33CAA3",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-02T09:14:19 | 2025-07-07T12:48:38 | 2025-07-07T12:48:31 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39171",
"html_url": "https://github.com/huggingface/transformers/pull/39171",
"diff_url": "https://github.com/huggingface/transformers/pull/39171.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39171.patch",
"merged_at": "2025-07-07T12:48:31"
} | # What does this PR do?
The function ``_compute_dynamic_ntk_parameters`` is not exportable with ``torch.export.export`` because of a data-dependent control flow; this PR rewrites it using ``torch.maximum``.
Fixes # (issue)
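For illustration, here is a minimal, hypothetical sketch of the pattern (the function names and values below are made up, not the actual ``_compute_dynamic_ntk_parameters`` code): branching on a tensor value is data-dependent control flow that ``torch.export.export`` cannot trace, while ``torch.maximum`` expresses the same computation as a single graph op.

```python
import torch

def effective_len_branch(seq_len: torch.Tensor, max_pos: int) -> torch.Tensor:
    # Data-dependent Python branch: fine in eager mode, but it breaks
    # torch.export.export because the trace cannot specialize on the value.
    if seq_len > max_pos:
        return seq_len
    return torch.tensor(max_pos)

def effective_len_exportable(seq_len: torch.Tensor, max_pos: int) -> torch.Tensor:
    # Same result, expressed as one traceable tensor op.
    return torch.maximum(seq_len, torch.tensor(max_pos))

print(effective_len_exportable(torch.tensor(4096), 2048))  # tensor(4096)
print(effective_len_exportable(torch.tensor(100), 2048))   # tensor(2048)
```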
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39171/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39170 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39170/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39170/comments | https://api.github.com/repos/huggingface/transformers/issues/39170/events | https://github.com/huggingface/transformers/pull/39170 | 3,194,963,050 | PR_kwDOCUB6oc6dCSH7 | 39,170 | Don't send new comment if the previous one is less than 30 minutes (unless the content is changed) | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-02T07:55:14 | 2025-07-07T12:43:52 | 2025-07-07T12:43:50 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39170",
"html_url": "https://github.com/huggingface/transformers/pull/39170",
"diff_url": "https://github.com/huggingface/transformers/pull/39170.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39170.patch",
"merged_at": "2025-07-07T12:43:50"
} | # What does this PR do?
https://github.com/huggingface/transformers/pull/39100#issuecomment-3026536511
In #39100, the previous comment is deleted and a new one is sent. But unless users refresh the PR page, the deleted comment still remains visible, which seems annoying.
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39170/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39169 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39169/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39169/comments | https://api.github.com/repos/huggingface/transformers/issues/39169/events | https://github.com/huggingface/transformers/issues/39169 | 3,194,939,829 | I_kwDOCUB6oc6-bum1 | 39,169 | Using Gemma3n with text-only generation requires image dependencies | {
"login": "marianheinsen",
"id": 43065575,
"node_id": "MDQ6VXNlcjQzMDY1NTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/43065575?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marianheinsen",
"html_url": "https://github.com/marianheinsen",
"followers_url": "https://api.github.com/users/marianheinsen/followers",
"following_url": "https://api.github.com/users/marianheinsen/following{/other_user}",
"gists_url": "https://api.github.com/users/marianheinsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marianheinsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marianheinsen/subscriptions",
"organizations_url": "https://api.github.com/users/marianheinsen/orgs",
"repos_url": "https://api.github.com/users/marianheinsen/repos",
"events_url": "https://api.github.com/users/marianheinsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/marianheinsen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-02T07:46:43 | 2025-08-01T08:14:26 | 2025-08-01T08:14:26 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.53.0
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.12.8
- Huggingface_hub version: 0.33.2
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I want to use the Gemma3n model in a text-only generation pipeline (without any multimodal inputs). I'm using `Gemma3nForCausalLM` because it has only a language modeling head. But when running the script, it fails with an `ImportError` stating that `AutoImageProcessor` requires the PIL and timm libraries to work. How can I run Gemma3n for text generation without those image-related dependencies?
```python
from transformers import AutoTokenizer, Gemma3nForCausalLM
import torch
model_id = "google/gemma-3n-e4b"
model = Gemma3nForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_length=30)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```
### Expected behavior
I expect the script to run successfully without installing `pillow` and `timm`. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39169/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39169/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39168 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39168/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39168/comments | https://api.github.com/repos/huggingface/transformers/issues/39168/events | https://github.com/huggingface/transformers/issues/39168 | 3,194,887,414 | I_kwDOCUB6oc6-bhz2 | 39,168 | Illegal memory access when using 3d rope | {
"login": "BakerBunker",
"id": 17872844,
"node_id": "MDQ6VXNlcjE3ODcyODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/17872844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BakerBunker",
"html_url": "https://github.com/BakerBunker",
"followers_url": "https://api.github.com/users/BakerBunker/followers",
"following_url": "https://api.github.com/users/BakerBunker/following{/other_user}",
"gists_url": "https://api.github.com/users/BakerBunker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BakerBunker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BakerBunker/subscriptions",
"organizations_url": "https://api.github.com/users/BakerBunker/orgs",
"repos_url": "https://api.github.com/users/BakerBunker/repos",
"events_url": "https://api.github.com/users/BakerBunker/events{/privacy}",
"received_events_url": "https://api.github.com/users/BakerBunker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-02T07:26:50 | 2025-07-08T13:35:23 | 2025-07-08T13:35:23 | CONTRIBUTOR | null | null | null | null | ### System Info
Latest transformers `main` branch
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Use a model with 3D rope, such as Qwen2.5-VL or Qwen2.5-Omni, and run inference with Flash Attention 2.
https://github.com/huggingface/transformers/blob/e8e0c76162263840661fc0ca0da3952861754759/src/transformers/modeling_flash_attention_utils.py#L511-L525
Here, `(torch.diff(position_ids, dim=-1) >= 0).all()` returns `False` when using 3D rope, since the h and w dimensions are not monotonic.
### Expected behavior
Change `(torch.diff(position_ids, dim=-1) >= 0).all()` to `(torch.diff(position_ids[0] if position_ids.dim() == 3 else position_ids, dim=-1) >= 0).all()`
or add
```python
if position_ids is not None and position_ids.dim() == 3:
position_ids = position_ids[0]
```
before `is_fa2_with_position_ids`, since `position_ids` is also used here:
https://github.com/huggingface/transformers/blob/e8e0c76162263840661fc0ca0da3952861754759/src/transformers/modeling_flash_attention_utils.py#L550-L560
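To illustrate the failure mode, here is a small sketch with made-up 3D position ids (shape `(3, batch, seq)` for the t/h/w components), not values from a real model run:

```python
import torch

# The temporal row is monotonic, but the w row cycles across image patches,
# so the global monotonicity check rejects a perfectly valid 3D rope input.
position_ids = torch.tensor([
    [[0, 1, 2, 3]],  # t component
    [[0, 0, 1, 1]],  # h component
    [[0, 1, 0, 1]],  # w component: not monotonic
])
print((torch.diff(position_ids, dim=-1) >= 0).all())     # tensor(False)
print((torch.diff(position_ids[0], dim=-1) >= 0).all())  # tensor(True)
```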
| {
"login": "BakerBunker",
"id": 17872844,
"node_id": "MDQ6VXNlcjE3ODcyODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/17872844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BakerBunker",
"html_url": "https://github.com/BakerBunker",
"followers_url": "https://api.github.com/users/BakerBunker/followers",
"following_url": "https://api.github.com/users/BakerBunker/following{/other_user}",
"gists_url": "https://api.github.com/users/BakerBunker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BakerBunker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BakerBunker/subscriptions",
"organizations_url": "https://api.github.com/users/BakerBunker/orgs",
"repos_url": "https://api.github.com/users/BakerBunker/repos",
"events_url": "https://api.github.com/users/BakerBunker/events{/privacy}",
"received_events_url": "https://api.github.com/users/BakerBunker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39168/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39167 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39167/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39167/comments | https://api.github.com/repos/huggingface/transformers/issues/39167/events | https://github.com/huggingface/transformers/issues/39167 | 3,194,865,538 | I_kwDOCUB6oc6-bceC | 39,167 | apply_rotary_pos_emb_flashatt failed during triton jit compilation 'constexpr' object has no attribute 'bit_length' | {
"login": "hebiao064",
"id": 11166516,
"node_id": "MDQ6VXNlcjExMTY2NTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/11166516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hebiao064",
"html_url": "https://github.com/hebiao064",
"followers_url": "https://api.github.com/users/hebiao064/followers",
"following_url": "https://api.github.com/users/hebiao064/following{/other_user}",
"gists_url": "https://api.github.com/users/hebiao064/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hebiao064/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hebiao064/subscriptions",
"organizations_url": "https://api.github.com/users/hebiao064/orgs",
"repos_url": "https://api.github.com/users/hebiao064/repos",
"events_url": "https://api.github.com/users/hebiao064/events{/privacy}",
"received_events_url": "https://api.github.com/users/hebiao064/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-02T07:18:13 | 2025-08-11T05:26:36 | 2025-08-11T05:26:36 | NONE | null | null | null | null | ### System Info
Hi,
I am trying to run FSDP training with `Qwen/Qwen2.5-VL-3B-Instruct`, and I hit the following error:
```
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 856, in forward
output = self._fsdp_wrapped_module(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/transformers/utils/generic.py", line 969, in wrapper
output = func(self, *args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1908, in forward
outputs = self.model(
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1661, in forward
image_embeds = self.get_image_features(pixel_values, image_grid_thw)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1614, in get_image_features
image_embeds = self.visual(pixel_values, grid_thw=image_grid_thw)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 530, in forward
hidden_states = blk(hidden_states, cu_seqlens=cu_seqlens_now, position_embeddings=position_embeddings)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 856, in forward
output = self._fsdp_wrapped_module(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 341, in forward
hidden_states = hidden_states + self.attn(
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 189, in forward
q, k = apply_rotary_pos_emb_flashatt(q.unsqueeze(0), k.unsqueeze(0), cos, sin)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 156, in apply_rotary_pos_emb_flashatt
q_embed = apply_rotary_emb(q.float(), cos.float(), sin.float()).type_as(q)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 121, in apply_rotary_emb
return ApplyRotaryEmb.apply(
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/flash_attn/layers/rotary.py", line 51, in forward
out = apply_rotary(
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/flash_attn/ops/triton/rotary.py", line 159, in apply_rotary
torch.library.wrap_triton(rotary_kernel)[grid](
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 1812, in __call__
return tracing_triton_hopifier_singleton.call_triton_kernel(
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 1670, in call_triton_kernel
return self.call_HOP(variable, grids, combined_args_raw, tx)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 1766, in call_HOP
return triton_kernel_wrapper_mutation(
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 783, in __call__
return super().__call__(
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/_ops.py", line 471, in __call__
return wrapper()
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/_ops.py", line 467, in wrapper
return self.dispatch(
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/_ops.py", line 455, in dispatch
return kernel(*args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/_higher_order_ops/triton_kernel_wrap.py", line 886, in triton_kernel_wrapper_mutation_dense
kernel[grid_fn](*args, **kwargs, **constant_args)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/triton/runtime/jit.py", line 347, in <lambda>
return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/triton/runtime/jit.py", line 569, in run
kernel = self.compile(src, target=target, options=options.__dict__)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/triton/compiler/compiler.py", line 278, in compile
module = src.make_ir(options, codegen_fns, module_map, context)
File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/triton/compiler/compiler.py", line 81, in make_ir
return ast_to_ttir(self.fn, self, context=context, options=options, codegen_fns=codegen_fns,
triton.compiler.errors.CompilationError: at 32:28:
# Meta-parameters
# We want ROTARY_DIM to be constexpr, otherwise the triton compiler doesn't know that
# the mask is constant every 8 elements, and it will generate LDG.16 instead of LDG.128
ROTARY_DIM: tl.constexpr,
IS_SEQLEN_OFFSETS_TENSOR: tl.constexpr,
IS_VARLEN: tl.constexpr,
INTERLEAVED: tl.constexpr,
CONJUGATE: tl.constexpr,
BLOCK_H: tl.constexpr,
BLOCK_M: tl.constexpr,
):
BLOCK_K: tl.constexpr = triton.next_power_of_2(ROTARY_DIM)
^
AttributeError("'constexpr' object has no attribute 'bit_length'")
```
This is my env:
```
transformers 4.52.3
flash-attn 2.8.0.post2
flashinfer-python 0.2.6.post1
torch 2.7.1
```
I wonder if you have any insight into this strange compilation issue?
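For what it's worth, the failure can be reproduced outside Triton with a minimal stand-in (the class below is a sketch, not Triton's actual `constexpr`): arithmetic on a `tl.constexpr` returns another wrapper, and `next_power_of_2` then calls `.bit_length()` on the wrapper instead of a plain `int`, which is exactly the `AttributeError` in the trace. Unwrapping the value first avoids the crash.

```python
# Minimal stand-in sketch -- NOT Triton's real class -- showing why the trace
# above ends in AttributeError, and how unwrapping the value avoids it.
class Constexpr:
    """Stand-in for triton.language.constexpr: wraps an int, subtraction
    returns another wrapper (as Triton's constexpr arithmetic does)."""
    def __init__(self, value):
        self.value = value
    def __sub__(self, other):
        return Constexpr(self.value - getattr(other, "value", other))

def next_power_of_2(n):
    # Same formula Triton uses: smallest power of two >= n.
    return 1 << (n - 1).bit_length()

rotary_dim = Constexpr(96)
try:
    next_power_of_2(rotary_dim)  # (n - 1) is still a wrapper -> AttributeError
except AttributeError:
    pass
# Unwrapping before computing works: next_power_of_2(96) == 128.
assert next_power_of_2(rotary_dim.value) == 128
```

This suggests computing `BLOCK_K` from the unwrapped integer (or outside the kernel) rather than passing the `constexpr` wrapper straight into `triton.next_power_of_2`.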
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
.
### Expected behavior
Model forward should work without issues | {
"login": "hebiao064",
"id": 11166516,
"node_id": "MDQ6VXNlcjExMTY2NTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/11166516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hebiao064",
"html_url": "https://github.com/hebiao064",
"followers_url": "https://api.github.com/users/hebiao064/followers",
"following_url": "https://api.github.com/users/hebiao064/following{/other_user}",
"gists_url": "https://api.github.com/users/hebiao064/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hebiao064/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hebiao064/subscriptions",
"organizations_url": "https://api.github.com/users/hebiao064/orgs",
"repos_url": "https://api.github.com/users/hebiao064/repos",
"events_url": "https://api.github.com/users/hebiao064/events{/privacy}",
"received_events_url": "https://api.github.com/users/hebiao064/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39167/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39166 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39166/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39166/comments | https://api.github.com/repos/huggingface/transformers/issues/39166/events | https://github.com/huggingface/transformers/pull/39166 | 3,194,800,453 | PR_kwDOCUB6oc6dBv9l | 39,166 | [bugfix] fix flash attention 2 unavailable error on Ascend NPU | {
"login": "FightingZhen",
"id": 26176607,
"node_id": "MDQ6VXNlcjI2MTc2NjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/26176607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FightingZhen",
"html_url": "https://github.com/FightingZhen",
"followers_url": "https://api.github.com/users/FightingZhen/followers",
"following_url": "https://api.github.com/users/FightingZhen/following{/other_user}",
"gists_url": "https://api.github.com/users/FightingZhen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FightingZhen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FightingZhen/subscriptions",
"organizations_url": "https://api.github.com/users/FightingZhen/orgs",
"repos_url": "https://api.github.com/users/FightingZhen/repos",
"events_url": "https://api.github.com/users/FightingZhen/events{/privacy}",
"received_events_url": "https://api.github.com/users/FightingZhen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-02T06:53:14 | 2025-08-14T01:52:17 | 2025-07-07T13:03:39 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39166",
"html_url": "https://github.com/huggingface/transformers/pull/39166",
"diff_url": "https://github.com/huggingface/transformers/pull/39166.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39166.patch",
"merged_at": "2025-07-07T13:03:39"
} | # What does this PR do?
https://github.com/huggingface/transformers/pull/38972 introduced flash attention 3 into `transformers`. However, the change **introduced a bug** when using flash attention 2 on Ascend NPU.
The root cause is a **mismatch** between function names:
Functions defined from `transformers.integrations.npu_flash_attention`:
https://github.com/huggingface/transformers/blob/e8e0c76162263840661fc0ca0da3952861754759/src/transformers/modeling_flash_attention_utils.py#L140-L153
Functions actually used:
https://github.com/huggingface/transformers/blob/e8e0c76162263840661fc0ca0da3952861754759/src/transformers/modeling_flash_attention_utils.py#L470-L475
This PR solves the problem by renaming the flash attention 2 related functions (e.g. `npu_flash_attn_func`) in `transformers.integrations.npu_flash_attention` to the expected names, which contain a `_2_` segment (e.g. `flash_attn_2_func`).
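The mismatch can be illustrated with a toy lookup table (the dict and names below are assumptions for illustration, not the real `transformers` internals): the dispatcher asks for `_2_`-style names, while the NPU integration exported `npu_`-prefixed ones, so the lookup came back empty until the names were aligned.

```python
# Illustrative sketch only -- names are assumed, not the actual transformers code.
npu_integration = {"npu_flash_attn_func": lambda *a: "npu flash attention 2"}

def resolve_kernel(name, exports):
    # Mimics a dispatcher fetching an attention implementation by name.
    return exports.get(name)

# Before the rename: the `_2_` name the dispatcher expects is missing.
assert resolve_kernel("flash_attn_2_func", npu_integration) is None

# The fix described above: expose the same callable under the expected name.
npu_integration["flash_attn_2_func"] = npu_integration["npu_flash_attn_func"]
assert resolve_kernel("flash_attn_2_func", npu_integration)() == "npu flash attention 2"
```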
Fixes # (issue)
Not related.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39166/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39165 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39165/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39165/comments | https://api.github.com/repos/huggingface/transformers/issues/39165/events | https://github.com/huggingface/transformers/pull/39165 | 3,194,790,614 | PR_kwDOCUB6oc6dBt3G | 39,165 | Update expected values (after switching to A10) - part 2 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-02T06:49:05 | 2025-07-02T20:47:57 | 2025-07-02T20:47:55 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39165",
"html_url": "https://github.com/huggingface/transformers/pull/39165",
"diff_url": "https://github.com/huggingface/transformers/pull/39165.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39165.patch",
"merged_at": "2025-07-02T20:47:55"
} | # What does this PR do?
As discussed offline, merging to move fast, since this only updates expected outputs for A10.
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39165/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39164 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39164/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39164/comments | https://api.github.com/repos/huggingface/transformers/issues/39164/events | https://github.com/huggingface/transformers/pull/39164 | 3,194,682,780 | PR_kwDOCUB6oc6dBWwP | 39,164 | enable static cache on TP model | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-02T05:58:46 | 2025-07-09T21:14:45 | 2025-07-09T21:14:45 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39164",
"html_url": "https://github.com/huggingface/transformers/pull/39164",
"diff_url": "https://github.com/huggingface/transformers/pull/39164.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39164.patch",
"merged_at": "2025-07-09T21:14:45"
} | A TP model usually needs `torch.compile` to get a speed-up, but the static cache fails when running a TP model. This PR checks the TP size and then allocates the correct cache shape.
To reproduce the error:
`torchrun --standalone --nproc-per-node 2 tp_hf.py`
```python
import torch
import torch.distributed as dist
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "meta-llama/Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, tp_plan="auto")
if dist.is_initialized():
    print("Backend:", dist.get_backend())

# Prepare input tokens
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "It is done, and submitted. You can play 'Survival of the Tastiest' on Android, and on the web."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=10, cache_implementation="static")
```
error log:
```
[rank0]: File "/workspace/jiqing/transformers/src/transformers/cache_utils.py", line 1197, in update
[rank0]: return _static_cache_update(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/workspace/jiqing/transformers/src/transformers/cache_utils.py", line 54, in _static_cache_update
[rank0]: k_cache.index_copy_(2, cache_position, key_states)
[rank0]: RuntimeError: index_copy_(): Source/destination tensor must have same slice shapes. Destination slice shape: 1 8 128 at dimen
sion 2 and source slice shape: 1 4 128 at dimension 0.
```
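The shape logic this PR describes can be sketched in isolation (the helper name below is made up for illustration): under tensor parallelism each rank holds `num_kv_heads // tp_size` heads, so a static KV cache allocated with the full head count cannot accept the per-rank key/value states — the 8-vs-4 head mismatch in the error above.

```python
# Hedged sketch of the per-rank cache allocation described above; the helper
# name is hypothetical, not transformers' actual API.
def per_rank_kv_cache_shape(batch, num_kv_heads, max_seq_len, head_dim, tp_size=1):
    if num_kv_heads % tp_size != 0:
        raise ValueError("num_kv_heads must be divisible by tp_size")
    # Each TP rank only holds its shard of the KV heads.
    return (batch, num_kv_heads // tp_size, max_seq_len, head_dim)

# Matches the failing case: full cache (1, 8, L, 128) vs per-rank states (1, 4, L, 128).
assert per_rank_kv_cache_shape(1, 8, 1024, 128, tp_size=2) == (1, 4, 1024, 128)
```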
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39164/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39163 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39163/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39163/comments | https://api.github.com/repos/huggingface/transformers/issues/39163/events | https://github.com/huggingface/transformers/pull/39163 | 3,194,438,287 | PR_kwDOCUB6oc6dAi7z | 39,163 | fix default value of config to match checkpoints in LLaVa-OV models | {
"login": "ved1beta",
"id": 146507396,
"node_id": "U_kgDOCLuGhA",
"avatar_url": "https://avatars.githubusercontent.com/u/146507396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ved1beta",
"html_url": "https://github.com/ved1beta",
"followers_url": "https://api.github.com/users/ved1beta/followers",
"following_url": "https://api.github.com/users/ved1beta/following{/other_user}",
"gists_url": "https://api.github.com/users/ved1beta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ved1beta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ved1beta/subscriptions",
"organizations_url": "https://api.github.com/users/ved1beta/orgs",
"repos_url": "https://api.github.com/users/ved1beta/repos",
"events_url": "https://api.github.com/users/ved1beta/events{/privacy}",
"received_events_url": "https://api.github.com/users/ved1beta/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-02T03:58:38 | 2025-07-02T09:46:24 | 2025-07-02T09:45:51 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39163",
"html_url": "https://github.com/huggingface/transformers/pull/39163",
"diff_url": "https://github.com/huggingface/transformers/pull/39163.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39163.patch",
"merged_at": "2025-07-02T09:45:51"
} | # What does this PR do?
Fix the default config values to match the checkpoints in LLaVa-OV models.
Fixes #39089
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@zucchini-nlp
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39163/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39162 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39162/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39162/comments | https://api.github.com/repos/huggingface/transformers/issues/39162/events | https://github.com/huggingface/transformers/issues/39162 | 3,193,990,120 | I_kwDOCUB6oc6-YGvo | 39,162 | Not capable of exporting Mistral to ONNX format with the use of caching | {
"login": "EricJi150",
"id": 73372943,
"node_id": "MDQ6VXNlcjczMzcyOTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/73372943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EricJi150",
"html_url": "https://github.com/EricJi150",
"followers_url": "https://api.github.com/users/EricJi150/followers",
"following_url": "https://api.github.com/users/EricJi150/following{/other_user}",
"gists_url": "https://api.github.com/users/EricJi150/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EricJi150/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EricJi150/subscriptions",
"organizations_url": "https://api.github.com/users/EricJi150/orgs",
"repos_url": "https://api.github.com/users/EricJi150/repos",
"events_url": "https://api.github.com/users/EricJi150/events{/privacy}",
"received_events_url": "https://api.github.com/users/EricJi150/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T22:32:07 | 2025-08-09T08:03:02 | 2025-08-09T08:03:02 | NONE | null | null | null | null | Hello,
I've been attempting to export the HuggingFace [Mistral](https://huggingface.co/docs/transformers/model_doc/mistral?_sm_vck=fW0WBVNM3T7q2MFfHHmDslJVQjVBV4Vqsf6WkfjSqfR2VH0W6Rf3) model to ONNX format with caching on, but have been unsuccessful. After export, when I go to the config file, the `use_cache` variable is set to True. However, during inference, I still get an error stating that the model does not support caching.
This is the error:
> ValueError: `use_cache=True` was passed to the model but the loaded model only supports `use_cache=False`. Please load your current model with `use_cache=False` or export the original model once again with `use_cache=True` when calling the `from_pretrained` method. To re-export your model, simply set `export=True` in the `from_pretrained` method.
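The mismatch this error describes boils down to a simple consistency check between the cache setting baked into the exported graph and the one requested at inference (the sketch below is illustrative only, not optimum's actual implementation):

```python
# Illustrative sketch (not optimum's real code): if the exported ONNX graph has
# no past-key-value inputs, requesting use_cache=True at inference cannot work.
def resolve_use_cache(exported_with_cache: bool, requested: bool) -> bool:
    if requested and not exported_with_cache:
        raise ValueError(
            "model was exported without cache support; "
            "re-export the model with use_cache=True"
        )
    return requested and exported_with_cache

assert resolve_use_cache(exported_with_cache=True, requested=True) is True
assert resolve_use_cache(exported_with_cache=False, requested=False) is False
```

So even though `use_cache` is True in the saved config, the exported graph itself apparently lacks the past-key-value inputs, which is likely why re-exporting "with past" is being demanded.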
Here is my code:
```
import time
from optimum.exporters.onnx import main_export, model_configs, onnx_export_from_model
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import LlavaForConditionalGeneration, AutoTokenizer, pipeline, AutoModelForCausalLM
def export(output_path='onnx_language_model'):
    model = AutoModelForCausalLM.from_pretrained('mistralai/Mistral-7B-Instruct-v0.3', trust_remote_code=True)
    custom_onnx_configs = {
        "model": model_configs.MistralOnnxConfig(
            config=model.config,
            task="text-generation",
        )
    }
    onnx_export_from_model(
        model=model,
        task="text-generation",
        output=output_path,
        opset=17,
        custom_onnx_configs=custom_onnx_configs
    )

def predict_onnx(text, model_path='onnx_language_model'):
    model = ORTModelForCausalLM.from_pretrained(model_path, provider='CUDAExecutionProvider')
    tokenizer = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-Instruct-v0.3')
    pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
    result = pipe(text, max_new_tokens=100)
    print(result)
``` | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39162/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39161 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39161/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39161/comments | https://api.github.com/repos/huggingface/transformers/issues/39161/events | https://github.com/huggingface/transformers/pull/39161 | 3,193,798,657 | PR_kwDOCUB6oc6c-hE- | 39,161 | fix `llama` tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T20:58:02 | 2025-07-01T21:27:24 | 2025-07-01T21:27:23 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39161",
"html_url": "https://github.com/huggingface/transformers/pull/39161",
"diff_url": "https://github.com/huggingface/transformers/pull/39161.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39161.patch",
"merged_at": "2025-07-01T21:27:23"
} | # What does this PR do?
Update expected values of `test_llama_3_1_hard` on `A10`.
We also need:
```python
# TODO: check why we have the following strange situation.
# without running in subprocess, this test causes subsequent tests failing with `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0!`
@run_test_using_subprocess
@slow
def test_model_7b_dola_generation(self):
```
```
which was never run before 2025/06/24. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39161/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39160 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39160/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39160/comments | https://api.github.com/repos/huggingface/transformers/issues/39160/events | https://github.com/huggingface/transformers/pull/39160 | 3,193,643,039 | PR_kwDOCUB6oc6c-AQq | 39,160 | Add activation sparsity reference in gemma3n doc | {
"login": "ChongYou",
"id": 31258352,
"node_id": "MDQ6VXNlcjMxMjU4MzUy",
"avatar_url": "https://avatars.githubusercontent.com/u/31258352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChongYou",
"html_url": "https://github.com/ChongYou",
"followers_url": "https://api.github.com/users/ChongYou/followers",
"following_url": "https://api.github.com/users/ChongYou/following{/other_user}",
"gists_url": "https://api.github.com/users/ChongYou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChongYou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChongYou/subscriptions",
"organizations_url": "https://api.github.com/users/ChongYou/orgs",
"repos_url": "https://api.github.com/users/ChongYou/repos",
"events_url": "https://api.github.com/users/ChongYou/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChongYou/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T19:54:26 | 2025-07-02T02:11:03 | 2025-07-02T02:11:03 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39160",
"html_url": "https://github.com/huggingface/transformers/pull/39160",
"diff_url": "https://github.com/huggingface/transformers/pull/39160.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39160.patch",
"merged_at": "2025-07-02T02:11:03"
} | Updating Gemma 3n docs to add a reference to paper on activation sparsity. | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39160/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39159 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39159/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39159/comments | https://api.github.com/repos/huggingface/transformers/issues/39159/events | https://github.com/huggingface/transformers/issues/39159 | 3,193,477,264 | I_kwDOCUB6oc6-WJiQ | 39,159 | [CI ENERGY Waste] Existing jobs in `Doctests` that have never completed successfully | {
"login": "souhailaS",
"id": 24392261,
"node_id": "MDQ6VXNlcjI0MzkyMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/24392261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/souhailaS",
"html_url": "https://github.com/souhailaS",
"followers_url": "https://api.github.com/users/souhailaS/followers",
"following_url": "https://api.github.com/users/souhailaS/following{/other_user}",
"gists_url": "https://api.github.com/users/souhailaS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/souhailaS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/souhailaS/subscriptions",
"organizations_url": "https://api.github.com/users/souhailaS/orgs",
"repos_url": "https://api.github.com/users/souhailaS/repos",
"events_url": "https://api.github.com/users/souhailaS/events{/privacy}",
"received_events_url": "https://api.github.com/users/souhailaS/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T18:43:52 | 2025-07-09T17:37:31 | 2025-07-09T17:37:31 | NONE | null | null | null | null |
# Problem
The `Doctests` workflow (https://github.com/huggingface/transformers/actions/runs/15988670157) runs a total of 479 jobs, consuming an average of 22 hours per run (https://github.com/huggingface/transformers/actions/runs/15988670157/usage). 337 of those jobs fail consistently. Would you consider removing the jobs with a 100% failure rate, or adding an adequate pre-check to avoid running them?
# Additional Context
We are a team of researchers from the University of Zurich (https://www.ifi.uzh.ch/en/zest.html) currently working on automating energy optimizations in GitHub Actions workflows. This workflow was flagged by our heuristics because of its high resource usage and failure rate.
souhaila.serbout@uzh.ch
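One possible shape for such a pre-check, sketched in plain Python (the job names, data, and threshold below are invented for illustration; a real implementation would pull run history from the GitHub Actions API):

```python
# Illustrative pre-check: skip jobs whose recorded failure rate is 100%.
# Job names and data are invented; a real implementation would query the
# GitHub Actions API for historical run outcomes.


def failure_rate(outcomes):
    """outcomes: list of booleans, True meaning the job succeeded."""
    if not outcomes:
        return 0.0
    return 1 - sum(outcomes) / len(outcomes)


def jobs_to_run(history, threshold=1.0):
    """Keep only jobs whose failure rate is strictly below the threshold."""
    return [job for job, outcomes in history.items() if failure_rate(outcomes) < threshold]


history = {
    "doctest_modeling_bert": [True, True, False],     # flaky but sometimes green
    "doctest_always_failing": [False, False, False],  # 100% failure rate
}
print(jobs_to_run(history))  # ['doctest_modeling_bert']
```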
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39159/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39158 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39158/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39158/comments | https://api.github.com/repos/huggingface/transformers/issues/39158/events | https://github.com/huggingface/transformers/pull/39158 | 3,193,453,809 | PR_kwDOCUB6oc6c9XHR | 39,158 | Refactor `PretrainedConfig.__init__` method to make it more explicit | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T18:33:19 | 2025-07-08T13:24:40 | 2025-07-08T13:24:40 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39158",
"html_url": "https://github.com/huggingface/transformers/pull/39158",
"diff_url": "https://github.com/huggingface/transformers/pull/39158.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39158.patch",
"merged_at": "2025-07-08T13:24:40"
} | # What does this PR do?
Make the `__init__` method more explicit by highlighting the common arguments used when creating a specific model config, so it's easier to tell whether an argument should be passed to `super().__init__` or defined in the inheriting config.
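A minimal sketch of the direction described, with illustrative names only (this is not the actual `PretrainedConfig` code): common arguments become explicit parameters of the base `__init__`, so a subclass author can see at a glance what belongs to `super().__init__`.

```python
# Hypothetical sketch: common arguments are named parameters of the base
# __init__ instead of being pulled out of **kwargs, making the split between
# shared and model-specific arguments visible in the signature.


class BaseConfig:
    def __init__(self, output_hidden_states=False, return_dict=True, is_encoder_decoder=False, **kwargs):
        # Explicit common arguments: discoverable via the signature.
        self.output_hidden_states = output_hidden_states
        self.return_dict = return_dict
        self.is_encoder_decoder = is_encoder_decoder
        # Anything else is still accepted for backward compatibility.
        for key, value in kwargs.items():
            setattr(self, key, value)


class MyModelConfig(BaseConfig):
    def __init__(self, hidden_size=768, num_layers=12, **kwargs):
        # Model-specific arguments live here; shared ones go to the parent.
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        super().__init__(**kwargs)


config = MyModelConfig(hidden_size=1024, output_hidden_states=True)
```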
cc @Cyrilvallez wdyt?
| {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39158/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39157 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39157/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39157/comments | https://api.github.com/repos/huggingface/transformers/issues/39157/events | https://github.com/huggingface/transformers/pull/39157 | 3,193,414,959 | PR_kwDOCUB6oc6c9Olq | 39,157 | Update expected values (after switching to A10) | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T18:17:51 | 2025-07-01T18:54:33 | 2025-07-01T18:54:32 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39157",
"html_url": "https://github.com/huggingface/transformers/pull/39157",
"diff_url": "https://github.com/huggingface/transformers/pull/39157.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39157.patch",
"merged_at": "2025-07-01T18:54:32"
} | # What does this PR do?
would merge as discussed offline | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39157/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39156 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39156/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39156/comments | https://api.github.com/repos/huggingface/transformers/issues/39156/events | https://github.com/huggingface/transformers/pull/39156 | 3,193,116,745 | PR_kwDOCUB6oc6c8Nag | 39,156 | Add torchcodec in docstrings/tests for `datasets` 4.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T16:30:53 | 2025-07-08T15:06:14 | 2025-07-08T15:06:12 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39156",
"html_url": "https://github.com/huggingface/transformers/pull/39156",
"diff_url": "https://github.com/huggingface/transformers/pull/39156.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39156.patch",
"merged_at": "2025-07-08T15:06:12"
Continuation of https://github.com/huggingface/transformers/pull/39060 (this PR therefore also contains the object detection fix, so the CI can run with `datasets` on `main`).
It also adds support for `torchcodec.decoders.AudioDecoder` as input to the audio-classification and ASR pipelines. | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39156/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39156/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39155 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39155/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39155/comments | https://api.github.com/repos/huggingface/transformers/issues/39155/events | https://github.com/huggingface/transformers/pull/39155 | 3,193,099,310 | PR_kwDOCUB6oc6c8Jlg | 39,155 | Responses API in `transformers serve` | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T16:24:02 | 2025-07-16T12:45:38 | 2025-07-16T12:16:16 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39155",
"html_url": "https://github.com/huggingface/transformers/pull/39155",
"diff_url": "https://github.com/huggingface/transformers/pull/39155.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39155.patch",
"merged_at": "2025-07-16T12:16:16"
} | This PR offers an initial scaffolding of `transformers serve` so that it has both `/v1/chat/completions` and `/v1/responses` endpoints.
So far, it only implements a streaming version of both endpoints.
To implement the responses endpoint, I'm basing it on the "Streaming" tab of the OpenAI API reference for Responses, available here: https://platform.openai.com/docs/api-reference/responses/create
I validate that the implementation works as expected by using the `openai` package directly:
```py
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="<KEY>")
response = client.responses.create(
model="Qwen/Qwen2.5-0.5B-Instruct", instructions="You are a helpful assistant.", input="Hello!", stream=True
)
for event in response:
print(event)
``` | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39155/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39154 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39154/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39154/comments | https://api.github.com/repos/huggingface/transformers/issues/39154/events | https://github.com/huggingface/transformers/pull/39154 | 3,192,990,800 | PR_kwDOCUB6oc6c7yGk | 39,154 | fix(modeling_utils): Correctly call _init_weights in smart_apply | {
"login": "Flink-ddd",
"id": 180720690,
"node_id": "U_kgDOCsWUMg",
"avatar_url": "https://avatars.githubusercontent.com/u/180720690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Flink-ddd",
"html_url": "https://github.com/Flink-ddd",
"followers_url": "https://api.github.com/users/Flink-ddd/followers",
"following_url": "https://api.github.com/users/Flink-ddd/following{/other_user}",
"gists_url": "https://api.github.com/users/Flink-ddd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Flink-ddd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Flink-ddd/subscriptions",
"organizations_url": "https://api.github.com/users/Flink-ddd/orgs",
"repos_url": "https://api.github.com/users/Flink-ddd/repos",
"events_url": "https://api.github.com/users/Flink-ddd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Flink-ddd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T15:47:16 | 2025-07-02T09:18:50 | 2025-07-02T02:59:11 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39154",
"html_url": "https://github.com/huggingface/transformers/pull/39154",
"diff_url": "https://github.com/huggingface/transformers/pull/39154.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39154.patch",
"merged_at": null
} | ## Description
This PR fixes issue #39124.
The `smart_apply` helper function in `modeling_utils.py` is used to recursively initialize weights in a `PreTrainedModel`. The current implementation checks if a submodule has a custom `_init_weights` method. However, if it finds one, it incorrectly attempts to pass `module._initialize_weights` in the subsequent recursive call.
This leads to an `AttributeError` for any module that defines a custom `_init_weights` method but not an `_initialize_weights` method (e.g., the `Resampler` module used in certain large models, or other custom architectures).
The error reported is: `AttributeError: 'Resampler' object has no attribute '_initialize_weights'`
## Solution
The fix is to align the function being passed in the recursive call with the condition being checked. I've changed the recursive call to use `module._init_weights`, ensuring that the intended custom initialization function is correctly dispatched.
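A toy reconstruction of the dispatch bug in plain Python (simplified stand-ins, not the real `modeling_utils` code): the recursion must pass the same `_init_weights` attribute it just checked for.

```python
# Toy reconstruction of the dispatch bug described above. These classes are
# simplified stand-ins, not the real transformers implementation.


class Module:
    def __init__(self, *children):
        self.children = list(children)

    def _init_weights(self, module):
        pass  # default: no-op


class CustomInit(Module):
    """Defines a custom _init_weights but NO _initialize_weights,
    like the Resampler module from the issue."""

    def __init__(self, *children):
        super().__init__(*children)
        self.initialized = False

    def _init_weights(self, module):
        module.initialized = True


def smart_apply(module, fn):
    for child in module.children:
        if type(child)._init_weights is not Module._init_weights:
            # The fix: recurse with the same attribute we just checked for.
            # The buggy version passed child._initialize_weights here, which
            # raises AttributeError for modules like CustomInit.
            smart_apply(child, child._init_weights)
        else:
            smart_apply(child, fn)
    fn(module)


leaf = CustomInit()
root = Module(leaf)
smart_apply(root, root._init_weights)  # no AttributeError; leaf gets initialized
```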
related test screenshot:
<img width="791" alt="Screenshot 2025-07-01 at 23 44 58" src="https://github.com/user-attachments/assets/a7df27ee-520d-490b-9d9d-acfc07e45d52" />
| {
"login": "Flink-ddd",
"id": 180720690,
"node_id": "U_kgDOCsWUMg",
"avatar_url": "https://avatars.githubusercontent.com/u/180720690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Flink-ddd",
"html_url": "https://github.com/Flink-ddd",
"followers_url": "https://api.github.com/users/Flink-ddd/followers",
"following_url": "https://api.github.com/users/Flink-ddd/following{/other_user}",
"gists_url": "https://api.github.com/users/Flink-ddd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Flink-ddd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Flink-ddd/subscriptions",
"organizations_url": "https://api.github.com/users/Flink-ddd/orgs",
"repos_url": "https://api.github.com/users/Flink-ddd/repos",
"events_url": "https://api.github.com/users/Flink-ddd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Flink-ddd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39154/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39153 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39153/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39153/comments | https://api.github.com/repos/huggingface/transformers/issues/39153/events | https://github.com/huggingface/transformers/pull/39153 | 3,192,936,891 | PR_kwDOCUB6oc6c7mi7 | 39,153 | Fix missing fsdp & trainer jobs in daily CI | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T15:29:31 | 2025-07-01T16:10:33 | 2025-07-01T16:10:31 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39153",
"html_url": "https://github.com/huggingface/transformers/pull/39153",
"diff_url": "https://github.com/huggingface/transformers/pull/39153.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39153.patch",
"merged_at": "2025-07-01T16:10:31"
} | # What does this PR do?
It's missing, as one can see in
https://huggingface.slack.com/archives/C06SZSGL2AF/p1750393112155819
It's caused by
Switch to use A10 progressively (#38936)
where we add an extra variable `runner_map` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39153/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39152 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39152/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39152/comments | https://api.github.com/repos/huggingface/transformers/issues/39152/events | https://github.com/huggingface/transformers/pull/39152 | 3,192,743,779 | PR_kwDOCUB6oc6c69D0 | 39,152 | when delaying optimizer creation only prepare the model | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-01T14:32:07 | 2025-07-03T15:46:07 | 2025-07-03T07:04:16 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39152",
"html_url": "https://github.com/huggingface/transformers/pull/39152",
"diff_url": "https://github.com/huggingface/transformers/pull/39152.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39152.patch",
"merged_at": "2025-07-03T07:04:16"
} | # What does this PR do?
Axolotl's CI caught a regression when we tried to upgrade to the latest transformers: https://github.com/axolotl-ai-cloud/axolotl/actions/runs/15962262932/job/45016550543
PR #36132 introduced a regression breaking FSDP with Llama:
```
stderr: [rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 408, in forward
stderr: [rank0]: inputs_embeds = self.embed_tokens(input_ids)
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
stderr: [rank0]: return self._call_impl(*args, **kwargs)
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
stderr: [rank0]: return forward_call(*args, **kwargs)
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/nn/modules/sparse.py", line 190, in forward
stderr: [rank0]: return F.embedding(
stderr: [rank0]:     ^^^^^^^^^^^^
stderr: [rank0]:   File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/nn/functional.py", line 2551, in embedding
stderr: [rank0]:     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
stderr: [rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: RuntimeError: Output 0 of ViewBackward0 is a view and its base or another view of its base has been modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
```
and FSDP+DPO+qwen
```
stderr: [rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 408, in forward
stderr: [rank0]: inputs_embeds = self.embed_tokens(input_ids)
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
stderr: [rank0]: return self._call_impl(*args, **kwargs)
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
stderr: [rank0]: return forward_call(*args, **kwargs)
stderr: [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/nn/modules/sparse.py", line 190, in forward
stderr: [rank0]: return F.embedding(
stderr: [rank0]:     ^^^^^^^^^^^^
stderr: [rank0]:   File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/nn/functional.py", line 2551, in embedding
stderr: [rank0]:     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
stderr: [rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stderr: [rank0]: RuntimeError: Output 0 of ViewBackward0 is a view and its base or another view of its base has been modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
```
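The ordering this PR restores can be sketched as follows, with a stubbed accelerator (illustrative only; the real logic lives in `Trainer`): when optimizer creation is delayed, only the model goes through `prepare` first, and the optimizer is built afterwards from the prepared model.

```python
# Illustrative sketch of the preparation order (a stubbed accelerator, not the
# real Trainer/Accelerate code): with delayed optimizer creation, only the
# model is prepared first, and the optimizer is built from the prepared model.


class StubAccelerator:
    """Minimal stand-in for accelerate.Accelerator, for illustration only."""

    def prepare(self, *objs):
        prepared = tuple(("prepared", obj) for obj in objs)
        return prepared[0] if len(prepared) == 1 else prepared


def setup_training(model, make_optimizer, delay_optimizer_creation):
    accelerator = StubAccelerator()
    if delay_optimizer_creation:
        # The fix: prepare ONLY the model here, never (model, optimizer) together.
        model = accelerator.prepare(model)
        optimizer = accelerator.prepare(make_optimizer(model))
    else:
        optimizer = make_optimizer(model)
        model, optimizer = accelerator.prepare(model, optimizer)
    return model, optimizer
```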
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39152/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39151 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39151/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39151/comments | https://api.github.com/repos/huggingface/transformers/issues/39151/events | https://github.com/huggingface/transformers/issues/39151 | 3,192,595,554 | I_kwDOCUB6oc6-SyRi | 39,151 | `LayoutLMv3TokenizerFast` doesn't pass all the params. | {
"login": "sergiopaniego",
"id": 17179696,
"node_id": "MDQ6VXNlcjE3MTc5Njk2",
"avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sergiopaniego",
"html_url": "https://github.com/sergiopaniego",
"followers_url": "https://api.github.com/users/sergiopaniego/followers",
"following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}",
"gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions",
"organizations_url": "https://api.github.com/users/sergiopaniego/orgs",
"repos_url": "https://api.github.com/users/sergiopaniego/repos",
"events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}",
"received_events_url": "https://api.github.com/users/sergiopaniego/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-01T13:54:03 | 2025-07-01T15:25:15 | 2025-07-01T15:25:15 | MEMBER | null | null | null | null | ### System Info
- `transformers` version: 4.54.0.dev0
- Platform: Linux-6.1.140-154.222.amzn2023.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.17
- Huggingface_hub version: 0.33.1
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: Tesla T4
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
This issue was reported via Discord [here](https://discord.com/channels/879548962464493619/1019883044724822016/threads/1389535268494901379).
The code to reproduce is the following:
```python
from transformers import LayoutLMv3Processor

processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base")

def preprocess(example):
    # `image`, `normalized_bboxes`, and `label2id` are defined earlier in the original script
    encoding = processor(
        image,
        example["words"],
        is_split_into_words=True,
        boxes=normalized_bboxes,
        word_labels=[label2id[l] for l in example["labels"]],
        truncation=True,
        padding="max_length",
        return_tensors="pt"
    )
    return encoding

tokenized_dataset = dataset.map(preprocess, remove_columns=dataset.column_names)
```
### Expected behavior
The previous code instantiates the fast tokenizer and generates the following error:
```bash
TypeError: LayoutLMv3TokenizerFast._batch_encode_plus() got an unexpected keyword argument 'is_split_into_words'
```
If using `use_fast=False` when creating, the issue doesn't appear. Looking into the code, it seems like there is a missing `**kwargs,` [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/layoutlmv3/tokenization_layoutlmv3_fast.py#L516) when using the fast tokenizer. The problem is also present in `tokenization_layoutlmv2_fast`.
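The failure mode is plain Python: a method defined without `**kwargs` rejects keyword arguments that callers forward to it. A minimal stand-in (names and signatures here are illustrative, not the actual tokenizer code):

```python
def batch_encode_broken(texts, truncation=False):
    # no **kwargs: extra keywords such as is_split_into_words raise TypeError
    return {"texts": texts, "truncation": truncation}

def batch_encode_fixed(texts, truncation=False, **kwargs):
    # **kwargs lets callers forward tokenizer options transparently
    return {"texts": texts, "truncation": truncation, **kwargs}

got_type_error = False
try:
    batch_encode_broken(["hello"], is_split_into_words=True)
except TypeError:
    got_type_error = True

encoding = batch_encode_fixed(["hello"], is_split_into_words=True)
```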
I've already managed to solve it and will create the PR shortly :) | {
"login": "sergiopaniego",
"id": 17179696,
"node_id": "MDQ6VXNlcjE3MTc5Njk2",
"avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sergiopaniego",
"html_url": "https://github.com/sergiopaniego",
"followers_url": "https://api.github.com/users/sergiopaniego/followers",
"following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}",
"gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions",
"organizations_url": "https://api.github.com/users/sergiopaniego/orgs",
"repos_url": "https://api.github.com/users/sergiopaniego/repos",
"events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}",
"received_events_url": "https://api.github.com/users/sergiopaniego/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39151/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39150 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39150/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39150/comments | https://api.github.com/repos/huggingface/transformers/issues/39150/events | https://github.com/huggingface/transformers/pull/39150 | 3,192,504,226 | PR_kwDOCUB6oc6c6JGi | 39,150 | Efficient Expert Weight Fusion for Moe deepseek v3 | {
"login": "VassilyLombard",
"id": 214468381,
"node_id": "U_kgDODMiHHQ",
"avatar_url": "https://avatars.githubusercontent.com/u/214468381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VassilyLombard",
"html_url": "https://github.com/VassilyLombard",
"followers_url": "https://api.github.com/users/VassilyLombard/followers",
"following_url": "https://api.github.com/users/VassilyLombard/following{/other_user}",
"gists_url": "https://api.github.com/users/VassilyLombard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VassilyLombard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VassilyLombard/subscriptions",
"organizations_url": "https://api.github.com/users/VassilyLombard/orgs",
"repos_url": "https://api.github.com/users/VassilyLombard/repos",
"events_url": "https://api.github.com/users/VassilyLombard/events{/privacy}",
"received_events_url": "https://api.github.com/users/VassilyLombard/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-01T13:29:14 | 2025-07-09T20:59:54 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39150",
"html_url": "https://github.com/huggingface/transformers/pull/39150",
"diff_url": "https://github.com/huggingface/transformers/pull/39150.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39150.patch",
"merged_at": null
} | # What does this PR do?
This PR answers a call for contribution. It introduces a fully vectorized, efficient Mixture-of-Experts (MoE) implementation for the DeepseekV3 model, specifically in the class DeepseekV3Moe. The new approach eliminates the loop over experts, improving inference and training speed, especially for large numbers of experts (256 here!).
Key Changes
* Efficient MoE Forward Pass:
Instead of iterating over experts, all expert weights are stacked into tensors, and expert selection is performed using indexing and batched matrix multiplications.
* Expert Routing:
For each token, the top-k expert indices and routing weights are computed by the router. Inputs are repeated and routed to the selected experts in a single batched operation.
* Batched Expert Computation:
All expert computations (linear projections and activations) are performed in parallel for all tokens and their assigned experts.
* Aggregation:
The outputs from all routed experts are weighted and summed back to the original token positions, producing the final MoE output without an explicit loop.
* Compatibility:
The implementation assumes all experts are instances of the same class and have the same architecture, enabling parameter stacking and efficient computation.
No additional dependencies are required. @ArthurZucker
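The routing scheme described above can be sketched in a few lines. This is a toy version with made-up dimensions and a plain softmax gate, not the actual `DeepseekV3Moe` implementation:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_tokens, hidden, inter, n_experts, top_k = 8, 16, 32, 4, 2

x = torch.randn(n_tokens, hidden)
# All expert weights stacked into single tensors instead of a Python list of Modules.
w_up = torch.randn(n_experts, hidden, inter) / hidden**0.5
w_down = torch.randn(n_experts, inter, hidden) / inter**0.5

router_logits = torch.randn(n_tokens, n_experts)
weights, idx = torch.topk(router_logits.softmax(dim=-1), top_k, dim=-1)  # (T, k)

# Repeat each token once per selected expert, gather that expert's weights,
# and run all expert MLPs as one batched matmul.
x_rep = x.repeat_interleave(top_k, dim=0)                   # (T*k, hidden)
flat_idx = idx.reshape(-1)                                  # (T*k,)
h = F.silu(torch.bmm(x_rep.unsqueeze(1), w_up[flat_idx]))   # (T*k, 1, inter)
out = torch.bmm(h, w_down[flat_idx]).squeeze(1)             # (T*k, hidden)

# Weight each routed output and sum back to the original token positions.
y = (out * weights.reshape(-1, 1)).reshape(n_tokens, top_k, hidden).sum(dim=1)
```

A per-expert loop produces the same output, which makes an easy equivalence check when reviewing this kind of rewrite.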
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39150/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39149 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39149/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39149/comments | https://api.github.com/repos/huggingface/transformers/issues/39149/events | https://github.com/huggingface/transformers/pull/39149 | 3,192,299,308 | PR_kwDOCUB6oc6c5cvB | 39,149 | Fix continuous batching in `transformers serve` | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T12:30:13 | 2025-07-03T16:15:33 | 2025-07-03T16:15:31 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39149",
"html_url": "https://github.com/huggingface/transformers/pull/39149",
"diff_url": "https://github.com/huggingface/transformers/pull/39149.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39149.patch",
"merged_at": "2025-07-03T16:15:31"
} | null | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39149/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39148 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39148/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39148/comments | https://api.github.com/repos/huggingface/transformers/issues/39148/events | https://github.com/huggingface/transformers/pull/39148 | 3,192,187,832 | PR_kwDOCUB6oc6c5ECW | 39,148 | [masking] fix Aggressive boolean conversion breaking packing implementations | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T11:59:59 | 2025-07-07T12:58:40 | 2025-07-07T12:58:39 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39148",
"html_url": "https://github.com/huggingface/transformers/pull/39148",
"diff_url": "https://github.com/huggingface/transformers/pull/39148.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39148.patch",
"merged_at": null
} | # What does this PR do?
Fixes the aggressive boolean conversion in `masking_utils` that breaks packing implementations.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39148/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39147 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39147/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39147/comments | https://api.github.com/repos/huggingface/transformers/issues/39147/events | https://github.com/huggingface/transformers/pull/39147 | 3,192,109,559 | PR_kwDOCUB6oc6c4yud | 39,147 | [smolvlm] fix video inference | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-01T11:36:55 | 2025-07-02T10:05:10 | 2025-07-02T10:05:10 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39147",
"html_url": "https://github.com/huggingface/transformers/pull/39147",
"diff_url": "https://github.com/huggingface/transformers/pull/39147.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39147.patch",
"merged_at": "2025-07-02T10:05:10"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/pull/39006. The model actually had default values for sampling, so the flag has to be set to `True` for backward compatibility.
Also added a small test; we had no video tests, which is why the bug went unnoticed.
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39147/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39146 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39146/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39146/comments | https://api.github.com/repos/huggingface/transformers/issues/39146/events | https://github.com/huggingface/transformers/pull/39146 | 3,191,914,151 | PR_kwDOCUB6oc6c4HOj | 39,146 | fix: remove undefined variable | {
"login": "ybkurt",
"id": 92328721,
"node_id": "U_kgDOBYDTEQ",
"avatar_url": "https://avatars.githubusercontent.com/u/92328721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ybkurt",
"html_url": "https://github.com/ybkurt",
"followers_url": "https://api.github.com/users/ybkurt/followers",
"following_url": "https://api.github.com/users/ybkurt/following{/other_user}",
"gists_url": "https://api.github.com/users/ybkurt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ybkurt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ybkurt/subscriptions",
"organizations_url": "https://api.github.com/users/ybkurt/orgs",
"repos_url": "https://api.github.com/users/ybkurt/repos",
"events_url": "https://api.github.com/users/ybkurt/events{/privacy}",
"received_events_url": "https://api.github.com/users/ybkurt/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T10:46:52 | 2025-07-01T17:10:29 | 2025-07-01T17:10:29 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39146",
"html_url": "https://github.com/huggingface/transformers/pull/39146",
"diff_url": "https://github.com/huggingface/transformers/pull/39146.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39146.patch",
"merged_at": "2025-07-01T17:10:29"
`MusicgenSinusoidalPositionalEmbedding` appears to be a modified copy of `Speech2TextSinusoidalPositionalEmbedding`; however, its `__init__` method never defines `self.offset`. This PR removes the undefined `self.offset` reference from the `MusicgenSinusoidalPositionalEmbedding` implementation.
@eustlb
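A toy illustration of this bug class (names and shapes are made up, not the actual Musicgen code): a copied module keeps a reference to an attribute its own `__init__` never defines, so nothing fails until the first call:

```python
import torch
import torch.nn as nn

class SinusoidalPositionalEmbedding(nn.Module):
    def __init__(self, num_positions: int, embedding_dim: int):
        super().__init__()
        self.embedding_dim = embedding_dim  # note: no self.offset defined

    def forward_buggy(self, position_ids: torch.Tensor) -> torch.Tensor:
        return position_ids + self.offset   # AttributeError at call time

    def forward(self, position_ids: torch.Tensor) -> torch.Tensor:
        return position_ids                 # fixed: the dead reference is removed

emb = SinusoidalPositionalEmbedding(1024, 64)
pos = torch.arange(4)
hit_attribute_error = False
try:
    emb.forward_buggy(pos)
except AttributeError:
    hit_attribute_error = True
```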
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39146/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39145 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39145/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39145/comments | https://api.github.com/repos/huggingface/transformers/issues/39145/events | https://github.com/huggingface/transformers/pull/39145 | 3,191,811,976 | PR_kwDOCUB6oc6c3wuK | 39,145 | RotaryEmbeddings change `is not None` -> `isinstance(..., dict)` | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T10:22:23 | 2025-07-07T13:05:28 | 2025-07-07T13:05:28 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39145",
"html_url": "https://github.com/huggingface/transformers/pull/39145",
"diff_url": "https://github.com/huggingface/transformers/pull/39145.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39145.patch",
"merged_at": "2025-07-07T13:05:28"
} | # What does this PR do?
Small quality improvement of if/else statement for RotaryEmbeddings, motivated by static typechecking
The behaviour has slightly changed in case `rope_scaling` has an unexpected type:
| **Condition** | **Before (Previous Behavior)** | **After (Updated Behavior)** |
|-------------------------------------------|-----------------------------------------------------|----------------------------------------|
| `rope_scaling` is a `dict` | `rope_type` retrieved from dict |`rope_type` retrieved from dict |
| `rope_scaling` is `None` | `rope_type = "default"` | `rope_type = "default"` |
| `rope_scaling` is an unexpected type | Raises `AttributeError: '<type>'`<br>`object has no attribute 'get'` | `rope_type = "default"` |
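The change can be sketched like this (an illustrative simplification with hypothetical names, not the exact transformers code):

```python
def resolve_rope_type(rope_scaling):
    # Only dicts support .get(); anything else (including None) falls back
    # to the default rope type instead of raising AttributeError.
    if isinstance(rope_scaling, dict):
        return rope_scaling.get("rope_type", rope_scaling.get("type", "default"))
    return "default"

print(resolve_rope_type({"rope_type": "linear"}))  # linear
print(resolve_rope_type(None))                     # default
print(resolve_rope_type(42))                       # default (previously: AttributeError)
```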
cc @gante
| {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39145/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39145/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39144 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39144/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39144/comments | https://api.github.com/repos/huggingface/transformers/issues/39144/events | https://github.com/huggingface/transformers/pull/39144 | 3,191,557,407 | PR_kwDOCUB6oc6c25ZP | 39,144 | fix bug when using gptq model on xpu device | {
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T09:18:24 | 2025-07-04T01:27:54 | 2025-07-04T01:27:54 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39144",
"html_url": "https://github.com/huggingface/transformers/pull/39144",
"diff_url": "https://github.com/huggingface/transformers/pull/39144.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39144.patch",
"merged_at": null
} | @SunMarc @MekkCyber, please help review. When we run the following sample code on an XPU device:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-8B-Instruct-GPTQ-INT4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)

prompt = [
    {"role": "system", "content": "You are a helpful assistant, that responds as a pirate."},
    {"role": "user", "content": "What's Deep Learning?"},
]
inputs = tokenizer.apply_chat_template(
    prompt,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
).to("xpu")

outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
it crashes with the following error:
```
Traceback (most recent call last):
File "/root/HuggingFace/third_party/peft/examples/sft/test.py", line 24, in <module>
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/generation/utils.py", line 2597, in generate
result = self._sample(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/generation/utils.py", line 3557, in _sample
outputs = self(**model_inputs, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/utils/generic.py", line 969, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/models/llama/modeling_llama.py", line 688, in forward
outputs: BaseModelOutputWithPast = self.model(
^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/utils/generic.py", line 969, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/models/llama/modeling_llama.py", line 422, in forward
inputs_embeds = self.embed_tokens(input_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/sparse.py", line 190, in forward
return F.embedding(
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/functional.py", line 2551, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and xpu:0! (when checking argument for argument index in method wrapper_XPU__index_select)
```
The behavior should match CUDA here. If we do not set `device_map` in the `AutoModelForCausalLM.from_pretrained` API, the GPTQ-quantized model is loaded onto the CPU by default.
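The intended fallback can be sketched in plain Python (a hypothetical helper for illustration, not the actual transformers code):

```python
def resolve_target_device(device_map, xpu_available):
    # Honor an explicit device_map if the user supplied one; otherwise
    # prefer the available accelerator over the silent CPU default
    # described above.
    if device_map is not None:
        return device_map
    return "xpu" if xpu_available else "cpu"

print(resolve_target_device(None, xpu_available=True))      # xpu
print(resolve_target_device(None, xpu_available=False))     # cpu
print(resolve_target_device("cuda:0", xpu_available=True))  # cuda:0
```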
| {
"login": "kaixuanliu",
"id": 13268042,
"node_id": "MDQ6VXNlcjEzMjY4MDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13268042?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaixuanliu",
"html_url": "https://github.com/kaixuanliu",
"followers_url": "https://api.github.com/users/kaixuanliu/followers",
"following_url": "https://api.github.com/users/kaixuanliu/following{/other_user}",
"gists_url": "https://api.github.com/users/kaixuanliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaixuanliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaixuanliu/subscriptions",
"organizations_url": "https://api.github.com/users/kaixuanliu/orgs",
"repos_url": "https://api.github.com/users/kaixuanliu/repos",
"events_url": "https://api.github.com/users/kaixuanliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaixuanliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39144/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39143 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39143/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39143/comments | https://api.github.com/repos/huggingface/transformers/issues/39143/events | https://github.com/huggingface/transformers/issues/39143 | 3,191,520,050 | I_kwDOCUB6oc6-Orsy | 39,143 | OpenTelemetry Collector Connection error when installing the latest release 4.53.0 during `docker build` | {
"login": "ancalita",
"id": 27920906,
"node_id": "MDQ6VXNlcjI3OTIwOTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/27920906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ancalita",
"html_url": "https://github.com/ancalita",
"followers_url": "https://api.github.com/users/ancalita/followers",
"following_url": "https://api.github.com/users/ancalita/following{/other_user}",
"gists_url": "https://api.github.com/users/ancalita/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ancalita/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ancalita/subscriptions",
"organizations_url": "https://api.github.com/users/ancalita/orgs",
"repos_url": "https://api.github.com/users/ancalita/repos",
"events_url": "https://api.github.com/users/ancalita/events{/privacy}",
"received_events_url": "https://api.github.com/users/ancalita/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-01T09:09:54 | 2025-07-16T09:54:34 | 2025-07-16T09:54:34 | NONE | null | null | null | null | ### System Info
```
- `transformers` version: 4.53.0
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.10.16
- Huggingface_hub version: 0.33.1
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
```
**Issue Description**:
I have an indirect dependency on `transformers` via the `gliner` package, which I use to download and save an HF model in a custom Dockerfile for my project. When building the image and running the step that executes the script below, I get many `ConnectionError`s. When I downgrade `transformers` to `4.52.4`, the errors disappear.
Example script:
```python
import os

from gliner import GLiNER


def download_model(model_path: str, model_name: str) -> None:
    """Download a Gliner model to the specified directory."""
    # Check if the directory already exists
    if not os.path.exists(model_path):
        # Create the directory
        os.makedirs(model_path)
    model = GLiNER.from_pretrained(model_name)
    model.save_pretrained(model_path)


if __name__ == "__main__":
    download_model(
        model_path="/app/models/gliner/",
        model_name="urchade/gliner_multi_pii-v1"
    )
```
Errors:
```
#9 [5/7] RUN python3 -m gliner_model_download_script
#9 3.744 Exception while exporting metrics HTTPConnectionPool(host='localhost', port=4318): Max retries exceeded with url: /v1/metrics (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff29492020>: Failed to establish a new connection: [Errno 111] Connection refused'))
#9 3.744 Traceback (most recent call last):
#9 3.744 File "/opt/venv/lib/python3.10/site-packages/urllib3/connection.py", line 198, in _new_conn
#9 3.744 sock = connection.create_connection(
#9 3.744 File "/opt/venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
#9 3.744 raise err
#9 3.744 File "/opt/venv/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
#9 3.744 sock.connect(sa)
#9 3.744 ConnectionRefusedError: [Errno 111] Connection refused
#9 3.744
#9 3.744 The above exception was the direct cause of the following exception:
#9 3.744
#9 3.744 Traceback (most recent call last):
#9 3.744 File "/opt/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
#9 3.744 response = self._make_request(
#9 3.744 File "/opt/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 493, in _make_request
#9 3.744 conn.request(
#9 3.744 File "/opt/venv/lib/python3.10/site-packages/urllib3/connection.py", line 445, in request
#9 3.744 self.endheaders()
#9 3.744 File "/usr/lib/python3.10/http/client.py", line 1278, in endheaders
#9 3.744 self._send_output(message_body, encode_chunked=encode_chunked)
#9 3.744 File "/usr/lib/python3.10/http/client.py", line 1038, in _send_output
#9 3.744 self.send(msg)
#9 3.744 File "/usr/lib/python3.10/http/client.py", line 976, in send
#9 3.744 self.connect()
#9 3.744 File "/opt/venv/lib/python3.10/site-packages/urllib3/connection.py", line 276, in connect
#9 3.744 self.sock = self._new_conn()
#9 3.744 File "/opt/venv/lib/python3.10/site-packages/urllib3/connection.py", line 213, in _new_conn
#9 3.744 raise NewConnectionError(
#9 3.744 urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0xffff29492020>: Failed to establish a new connection: [Errno 111] Connection refused
#9 3.744
#9 3.744 The above exception was the direct cause of the following exception:
#9 3.744
#9 3.744 Traceback (most recent call last):
#9 3.744 File "/opt/venv/lib/python3.10/site-packages/requests/adapters.py", line 667, in send
#9 3.744 resp = conn.urlopen(
#9 3.744 File "/opt/venv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 841, in urlopen
#9 3.744 retries = retries.increment(
#9 3.744 File "/opt/venv/lib/python3.10/site-packages/urllib3/util/retry.py", line 519, in increment
#9 3.744 raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
```
### Who can help?
@richardliaw
@amogkam
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Create a custom Dockerfile that installs the latest `gliner` and then runs the script shared above, for example:
```dockerfile
...
RUN python3 -m venv /opt/venv && \
. /opt/venv/bin/activate && \
pip install --no-cache-dir -U "pip==24.*" && \
pip install --no-cache-dir "gliner==0.2.21"
# Download HF model
COPY ./models /app/models
COPY ./gliner_model_download_script.py gliner_model_download_script.py
RUN python3 -m gliner_model_download_script
RUN ls -l /app/models/gliner
```
Then run `docker build .`
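If the stray metrics come from an OTLP exporter being enabled at import time, a possible workaround while staying on 4.53.0 is to disable telemetry via the standard OpenTelemetry environment variables (assumption: the exporter honors them; not verified against this transformers release):

```shell
# Standard OpenTelemetry SDK switches; set them before the download step,
# e.g. via ENV lines in the Dockerfile.
export OTEL_SDK_DISABLED=true
export OTEL_METRICS_EXPORTER=none
```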
### Expected behavior
No connection errors raised. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39143/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39142 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39142/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39142/comments | https://api.github.com/repos/huggingface/transformers/issues/39142/events | https://github.com/huggingface/transformers/pull/39142 | 3,191,330,838 | PR_kwDOCUB6oc6c2HIs | 39,142 | Move get_mask_sizes from Cache to masking_utils and remove use of get_seq_length. | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-01T08:22:53 | 2025-07-02T15:18:06 | 2025-07-02T15:18:06 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39142",
"html_url": "https://github.com/huggingface/transformers/pull/39142",
"diff_url": "https://github.com/huggingface/transformers/pull/39142.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39142.patch",
"merged_at": null
} | This PR depends on #39106
Look at the last commit, f09e0cd1c0d61fca0098487e9dcd9dfe409b364b:
I think having `get_mask_sizes` out of `Cache` makes much more sense. There is only one extra change:
https://github.com/huggingface/transformers/blob/6b6314d5f8304a1f547b04a00a4bd06a11f51681/src/transformers/masking_utils.py#L643
It substitutes `past_seen_tokens=past_key_values.get_seq_length()` (which depends on cache info that might be hard to compute, e.g. for QuantizedCaches). What we would like to compute is
```py
past_seen_tokens = cache_position[-1]
```
but that is not compatible with `torch.export`.
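For illustration, the same quantity can be derived without negative indexing, from the first element plus the statically known query length (plain Python over the position values; a sketch, not necessarily the PR's exact implementation):

```python
def past_seen_tokens(cache_position):
    # Equivalent to cache_position[-1]: first position plus the static
    # query length, minus one. Indexing the first element with a static
    # length is friendlier to shape tracing than negative indexing.
    return cache_position[0] + len(cache_position) - 1

print(past_seen_tokens([0, 1, 2, 3, 4, 5, 6]))  # 6 (prefill)
print(past_seen_tokens([16]))                   # 16 (decode)
```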
The new solution is `torch.export` friendly and works both when `cache_position = torch.tensor([ 0, 1, 2, 3, 4, 5, 6])` (prefill phase) and when `cache_position = torch.tensor([16])`. | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39142/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39141 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39141/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39141/comments | https://api.github.com/repos/huggingface/transformers/issues/39141/events | https://github.com/huggingface/transformers/issues/39141 | 3,190,560,775 | I_kwDOCUB6oc6-LBgH | 39,141 | VLLM depoly Qwen2.5_omni server error | {
"login": "WenmuZhou",
"id": 12406017,
"node_id": "MDQ6VXNlcjEyNDA2MDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/12406017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WenmuZhou",
"html_url": "https://github.com/WenmuZhou",
"followers_url": "https://api.github.com/users/WenmuZhou/followers",
"following_url": "https://api.github.com/users/WenmuZhou/following{/other_user}",
"gists_url": "https://api.github.com/users/WenmuZhou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WenmuZhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WenmuZhou/subscriptions",
"organizations_url": "https://api.github.com/users/WenmuZhou/orgs",
"repos_url": "https://api.github.com/users/WenmuZhou/repos",
"events_url": "https://api.github.com/users/WenmuZhou/events{/privacy}",
"received_events_url": "https://api.github.com/users/WenmuZhou/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-01T03:34:35 | 2025-07-02T04:08:38 | 2025-07-02T04:08:38 | NONE | null | null | null | null | ### System Info
```bash
INFO 07-01 03:29:45 [__init__.py:244] Automatically detected platform cuda.
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.4 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not collect
CMake version : version 3.30.2
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.7.1+cu126
Is debug build : False
CUDA used to build PyTorch : 12.6
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-5.4.0-169-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : 12.6.20
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration : GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version : 535.216.03
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.3.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.3.0
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7763 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 2450.0000
CPU min MHz: 1500.0000
BogoMIPS: 4890.68
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-31
NUMA node1 CPU(s): 32-63
NUMA node2 CPU(s): 64-95
NUMA node3 CPU(s): 96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] flash_attn==2.8.0.post2
[pip3] flake8==7.1.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cudnn-frontend==1.5.2
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-cufile-cu12==1.11.1.6
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-dali-cuda120==1.40.0
[pip3] nvidia-ml-py==12.575.51
[pip3] nvidia-ml-py3==7.352.0
[pip3] nvidia-modelopt==0.15.0
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvimgcodec-cu12==0.3.0.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] nvidia-pyindex==1.0.9
[pip3] nvidia-smi==0.1.3
[pip3] onnx==1.16.1
[pip3] onnxruntime-gpu==1.17.1
[pip3] onnxsim==0.4.36
[pip3] open-clip-torch==2.24.0
[pip3] optree==0.13.0
[pip3] pynvml==12.0.0
[pip3] pytorch-lightning==2.2.4
[pip3] pytorch-triton==3.0.0+dedb7bdf3
[pip3] pyzmq==26.2.0
[pip3] sentence-transformers==4.1.0
[pip3] torch==2.7.1
[pip3] torchaudio==2.7.0
[pip3] torchmetrics==1.4.0.post0
[pip3] torchpack==0.3.1
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.22.1
[pip3] transformers==4.52.4
[pip3] transformers-stream-generator==0.0.5
[pip3] triton==3.3.1
[conda] Could not collect
==============================
vLLM Info
==============================
ROCM Version : Could not collect
Neuron SDK Version : N/A
vLLM Version : 0.9.1
vLLM Build Flags:
CUDA Archs: 5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 NIC0 NIC1 NIC2 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X SYS SYS SYS 64-95 2 N/A
NIC0 SYS X SYS SYS
NIC1 SYS SYS X SYS
NIC2 SYS SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
==============================
Environment Variables
==============================
NVIDIA_VISIBLE_DEVICES=GPU-84d7af4f-bb6d-9c62-0358-bcf0488cbbe5
CUBLAS_VERSION=12.6.0.22
NVIDIA_REQUIRE_CUDA=cuda>=9.0
CUDA_CACHE_DISABLE=1
TORCH_CUDA_ARCH_LIST=5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX
NCCL_VERSION=2.22.3
NVIDIA_DRIVER_CAPABILITIES=video,compute,utility,graphics
NVIDIA_PRODUCT_NAME=PyTorch
CUDA_VERSION=12.6.0.022
PYTORCH_VERSION=2.5.0a0+872d972
PYTORCH_BUILD_NUMBER=0
CUDNN_FRONTEND_VERSION=1.5.2
CUDNN_VERSION=9.3.0.75
PYTORCH_HOME=/opt/pytorch/pytorch
LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/torch/lib:/usr/local/lib/python3.10/dist-packages/torch_tensorrt/lib:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NVIDIA_BUILD_ID=107063150
CUDA_DRIVER_VERSION=560.35.03
PYTORCH_BUILD_VERSION=2.5.0a0+872d972
CUDA_HOME=/usr/local/cuda
CUDA_HOME=/usr/local/cuda
CUDA_MODULE_LOADING=LAZY
NVIDIA_REQUIRE_JETPACK_HOST_MOUNTS=
NVIDIA_PYTORCH_VERSION=24.08
TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I use the following command to deploy the vLLM service:
```bash
VLLM_USE_V1=0 \
vllm serve "Qwen/Qwen2.5-Omni-3B" \
--port "8080" \
--dtype bfloat16 \
--allowed-local-media-path / \
--served-model-name "Qwen2.5-Omni-3B" \
--limit-mm-per-prompt "image=12"
```
with transformers==4.53.0 (including https://github.com/huggingface/transformers/pull/39125),
it reports this error:
```log
ERROR 07-01 03:20:42 [engine.py:458] cu_seqlens_q must have shape (batch_size + 1)
ERROR 07-01 03:20:42 [engine.py:458] Traceback (most recent call last):
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 446, in run_mp_engine
ERROR 07-01 03:20:42 [engine.py:458] engine = MQLLMEngine.from_vllm_config(
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 133, in from_vllm_config
ERROR 07-01 03:20:42 [engine.py:458] return cls(
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 87, in __init__
ERROR 07-01 03:20:42 [engine.py:458] self.engine = LLMEngine(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 268, in __init__
ERROR 07-01 03:20:42 [engine.py:458] self._initialize_kv_caches()
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 413, in _initialize_kv_caches
ERROR 07-01 03:20:42 [engine.py:458] self.model_executor.determine_num_available_blocks())
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 104, in determine_num_available_blocks
ERROR 07-01 03:20:42 [engine.py:458] results = self.collective_rpc("determine_num_available_blocks")
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 57, in collective_rpc
ERROR 07-01 03:20:42 [engine.py:458] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/utils.py", line 2671, in run_method
ERROR 07-01 03:20:42 [engine.py:458] return func(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 07-01 03:20:42 [engine.py:458] return func(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/worker/worker.py", line 256, in determine_num_available_blocks
ERROR 07-01 03:20:42 [engine.py:458] self.model_runner.profile_run()
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 07-01 03:20:42 [engine.py:458] return func(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1300, in profile_run
ERROR 07-01 03:20:42 [engine.py:458] self._dummy_run(max_num_batched_tokens, max_num_seqs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1426, in _dummy_run
ERROR 07-01 03:20:42 [engine.py:458] self.execute_model(model_input, kv_caches, intermediate_tensors)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 07-01 03:20:42 [engine.py:458] return func(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1844, in execute_model
ERROR 07-01 03:20:42 [engine.py:458] hidden_or_intermediate_states = model_executable(
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
ERROR 07-01 03:20:42 [engine.py:458] return self._call_impl(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
ERROR 07-01 03:20:42 [engine.py:458] return forward_call(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_5_omni_thinker.py", line 875, in forward
ERROR 07-01 03:20:42 [engine.py:458] multimodal_embeddings = self.get_multimodal_embeddings_v0(**kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_5_omni_thinker.py", line 831, in get_multimodal_embeddings_v0
ERROR 07-01 03:20:42 [engine.py:458] audio_embeds = self._process_audio_input(audio_input)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_5_omni_thinker.py", line 652, in _process_audio_input
ERROR 07-01 03:20:42 [engine.py:458] audio_outputs = self.audio_tower(
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
ERROR 07-01 03:20:42 [engine.py:458] return self._call_impl(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
ERROR 07-01 03:20:42 [engine.py:458] return forward_call(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py", line 838, in forward
ERROR 07-01 03:20:42 [engine.py:458] layer_outputs = encoder_layer(hidden_states, cu_seqlens, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/transformers/modeling_layers.py", line 83, in __call__
ERROR 07-01 03:20:42 [engine.py:458] return super().__call__(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
ERROR 07-01 03:20:42 [engine.py:458] return self._call_impl(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
ERROR 07-01 03:20:42 [engine.py:458] return forward_call(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py", line 704, in forward
ERROR 07-01 03:20:42 [engine.py:458] hidden_states = self.self_attn(
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
ERROR 07-01 03:20:42 [engine.py:458] return self._call_impl(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
ERROR 07-01 03:20:42 [engine.py:458] return forward_call(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py", line 650, in forward
ERROR 07-01 03:20:42 [engine.py:458] attn_output, _ = attention_interface(
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/transformers/integrations/flash_attention.py", line 65, in flash_attention_forward
ERROR 07-01 03:20:42 [engine.py:458] attn_output = _flash_attention_forward(
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/transformers/modeling_flash_attention_utils.py", line 520, in _flash_attention_forward
ERROR 07-01 03:20:42 [engine.py:458] attn_output_unpad = _flash_attn_varlen_func(
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 1443, in flash_attn_varlen_func
ERROR 07-01 03:20:42 [engine.py:458] return FlashAttnVarlenFunc.apply(
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/autograd/function.py", line 575, in apply
ERROR 07-01 03:20:42 [engine.py:458] return super().apply(*args, **kwargs) # type: ignore[misc]
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 925, in forward
ERROR 07-01 03:20:42 [engine.py:458] out_padded, softmax_lse, S_dmask, rng_state = _wrapped_flash_attn_varlen_forward(
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/_ops.py", line 1158, in __call__
ERROR 07-01 03:20:42 [engine.py:458] return self._op(*args, **(kwargs or {}))
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/_library/custom_ops.py", line 335, in backend_impl
ERROR 07-01 03:20:42 [engine.py:458] result = self._backend_fns[device_type](*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/_compile.py", line 51, in inner
ERROR 07-01 03:20:42 [engine.py:458] return disable_fn(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
ERROR 07-01 03:20:42 [engine.py:458] return fn(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/torch/_library/custom_ops.py", line 367, in wrapped_fn
ERROR 07-01 03:20:42 [engine.py:458] return fn(*args, **kwargs)
ERROR 07-01 03:20:42 [engine.py:458] File "/home/jun.zhou10/.local/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 165, in _flash_attn_varlen_forward
ERROR 07-01 03:20:42 [engine.py:458] out, softmax_lse, S_dmask, rng_state = flash_attn_gpu.varlen_fwd(
ERROR 07-01 03:20:42 [engine.py:458] RuntimeError: cu_seqlens_q must have shape (batch_size + 1)
```
full error log in [log.txt](https://github.com/user-attachments/files/20990247/log.txt)
It runs normally under transformers==4.52.4.
### Expected behavior
start server | {
"login": "WenmuZhou",
"id": 12406017,
"node_id": "MDQ6VXNlcjEyNDA2MDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/12406017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WenmuZhou",
"html_url": "https://github.com/WenmuZhou",
"followers_url": "https://api.github.com/users/WenmuZhou/followers",
"following_url": "https://api.github.com/users/WenmuZhou/following{/other_user}",
"gists_url": "https://api.github.com/users/WenmuZhou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WenmuZhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WenmuZhou/subscriptions",
"organizations_url": "https://api.github.com/users/WenmuZhou/orgs",
"repos_url": "https://api.github.com/users/WenmuZhou/repos",
"events_url": "https://api.github.com/users/WenmuZhou/events{/privacy}",
"received_events_url": "https://api.github.com/users/WenmuZhou/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39141/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39140 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39140/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39140/comments | https://api.github.com/repos/huggingface/transformers/issues/39140/events | https://github.com/huggingface/transformers/pull/39140 | 3,190,229,857 | PR_kwDOCUB6oc6cyc18 | 39,140 | feat(trainer): emergency checkpointing on crashes & SIGTERM/SIGINT | {
"login": "AyushSharma173",
"id": 41262335,
"node_id": "MDQ6VXNlcjQxMjYyMzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/41262335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AyushSharma173",
"html_url": "https://github.com/AyushSharma173",
"followers_url": "https://api.github.com/users/AyushSharma173/followers",
"following_url": "https://api.github.com/users/AyushSharma173/following{/other_user}",
"gists_url": "https://api.github.com/users/AyushSharma173/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AyushSharma173/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AyushSharma173/subscriptions",
"organizations_url": "https://api.github.com/users/AyushSharma173/orgs",
"repos_url": "https://api.github.com/users/AyushSharma173/repos",
"events_url": "https://api.github.com/users/AyushSharma173/events{/privacy}",
"received_events_url": "https://api.github.com/users/AyushSharma173/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-07-01T00:11:11 | 2025-10-06T08:15:53 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39140",
"html_url": "https://github.com/huggingface/transformers/pull/39140",
"diff_url": "https://github.com/huggingface/transformers/pull/39140.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39140.patch",
"merged_at": null
} | # What does this PR do?
Adds *failure-safe training* to the 🤗 `Trainer`.
* **New flag** `enable_emergency_checkpoint` in `TrainingArguments`
* default: `False` (opt-in)
* **Automatic emergency save** when training ends unexpectedly
  * unhandled exception inside `Trainer.train`
* external `SIGTERM`/`SIGINT` (e.g. job pre-emption, Ctrl-C)
* Checkpoint is written to
`<output_dir>/checkpoint-emergency`
and contains model weights, optimizer, scheduler, scaler, RNG state
plus a minimal `trainer_state.json`.
* Resume works transparently via
`TrainingArguments(..., resume_from_checkpoint=".../checkpoint-emergency")`.
### Motivation
Issue #38961 requests a robust way to avoid losing progress when a run is killed mid-epoch or by OOM/infra errors.
Current work-arounds (very frequent `save_steps` or a user-level `try/except`) are slow or brittle.
This PR makes the feature one-line (`enable_emergency_checkpoint=True`) and cost-free when disabled.
### Implementation details
* **`training_args.py`** – new dataclass field `enable_emergency_checkpoint`.
* **`trainer.py`**
* registers signal & `atexit` handlers when the flag is on
* wraps `train()` loop in `try/except` and calls `_common_emergency_save`
* `_common_emergency_save` is idempotent & rank-safe (`_emergency_save_running / _completed` flags).
* **Tests**
* `tests/trainer/test_emergency_ckpt.py`
* verifies flag round-trip
* asserts emergency folder is created on crash and training can resume
* checks that opting-out leaves no folder
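The idempotency guard described above can be sketched in isolation. This is a minimal illustration under assumed semantics — `EmergencySaver`, its flags, and `save_fn` are hypothetical names mirroring the described `_emergency_save_running` / `_completed` pattern, not the actual Trainer internals:

```python
import atexit
import signal

class EmergencySaver:
    """Idempotent emergency-save guard (illustrative names only)."""

    def __init__(self, save_fn):
        self.save_fn = save_fn
        self._running = False    # a save is currently in progress
        self._completed = False  # a save already succeeded

    def emergency_save(self):
        # A SIGTERM arriving while an exception handler is already
        # saving must not trigger a second, overlapping save.
        if self._running or self._completed:
            return False
        self._running = True
        try:
            self.save_fn()
            self._completed = True
        finally:
            self._running = False
        return True

    def install(self):
        # Route crashes, Ctrl-C, and job pre-emption through one guard.
        atexit.register(self.emergency_save)
        for sig in (signal.SIGTERM, signal.SIGINT):
            signal.signal(sig, lambda signum, frame: self.emergency_save())

calls = []
saver = EmergencySaver(lambda: calls.append("saved"))
assert saver.emergency_save() is True   # first trigger performs the save
assert saver.emergency_save() is False  # repeated triggers are no-ops
assert calls == ["saved"]
```

A failed save leaves `_completed` unset, so a later trigger may retry; only a successful save permanently disarms the guard.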
Fixes # (issue)
Closes #38961.
### Before submitting
- [x] I read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request).
- [x] The change is discussed in #38961.
- [x] Added unit tests.
- [x] Ran `make fixup && make quality` locally.
- [ ] Documentation not updated (trainer argument + short snippet) – **TODO**
## Who can review?
Trainer reviewers: @zach-huggingface, @SunMarc, @qgallouedec | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39140/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39139 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39139/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39139/comments | https://api.github.com/repos/huggingface/transformers/issues/39139/events | https://github.com/huggingface/transformers/issues/39139 | 3,190,019,092 | I_kwDOCUB6oc6-I9QU | 39,139 | Add x-transformers library by lucidrains | {
"login": "asigalov61",
"id": 56325539,
"node_id": "MDQ6VXNlcjU2MzI1NTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/56325539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asigalov61",
"html_url": "https://github.com/asigalov61",
"followers_url": "https://api.github.com/users/asigalov61/followers",
"following_url": "https://api.github.com/users/asigalov61/following{/other_user}",
"gists_url": "https://api.github.com/users/asigalov61/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asigalov61/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asigalov61/subscriptions",
"organizations_url": "https://api.github.com/users/asigalov61/orgs",
"repos_url": "https://api.github.com/users/asigalov61/repos",
"events_url": "https://api.github.com/users/asigalov61/events{/privacy}",
"received_events_url": "https://api.github.com/users/asigalov61/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-06-30T22:19:38 | 2025-07-02T11:29:42 | null | NONE | null | null | null | null | ### Feature request
Hey guys!
I would like to sincerely ask you to consider adding x-transformers library to Hugging Face transformers.
https://github.com/lucidrains/x-transformers
This library is very popular and versatile, and it's being used in many projects!
I am actually very surprised that this library was overlooked.
### Motivation
I am a heavy Hugging Face PRO user with many models and implementations that use the x-transformers library, so it would be very handy to have it integrated into Hugging Face so that users can easily use my models and other people's models that rely on this library.
### Your contribution
I can help with whatever is needed to help integrate this library. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39139/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39139/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39138 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39138/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39138/comments | https://api.github.com/repos/huggingface/transformers/issues/39138/events | https://github.com/huggingface/transformers/pull/39138 | 3,189,773,528 | PR_kwDOCUB6oc6cw45f | 39,138 | Updated the Model docs - for the MARIAN model | {
"login": "emanrissha",
"id": 213320948,
"node_id": "U_kgDODLcE9A",
"avatar_url": "https://avatars.githubusercontent.com/u/213320948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emanrissha",
"html_url": "https://github.com/emanrissha",
"followers_url": "https://api.github.com/users/emanrissha/followers",
"following_url": "https://api.github.com/users/emanrissha/following{/other_user}",
"gists_url": "https://api.github.com/users/emanrissha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emanrissha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emanrissha/subscriptions",
"organizations_url": "https://api.github.com/users/emanrissha/orgs",
"repos_url": "https://api.github.com/users/emanrissha/repos",
"events_url": "https://api.github.com/users/emanrissha/events{/privacy}",
"received_events_url": "https://api.github.com/users/emanrissha/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-30T20:32:00 | 2025-07-09T17:23:03 | 2025-07-09T17:23:03 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39138",
"html_url": "https://github.com/huggingface/transformers/pull/39138",
"diff_url": "https://github.com/huggingface/transformers/pull/39138.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39138.patch",
"merged_at": "2025-07-09T17:23:03"
} | This update improves the Marian model card to follow the Hugging Face standardized model card format. The changes include:
- Added a clear description of MarianMT, its architecture, and how it differs from other models.
- Provided usage examples for Pipeline and AutoModel.
- Added a quantization example for optimizing model inference.
- Included instructions and examples for multilingual translation with language codes.
- Added an Attention Mask Visualizer example.
- Added a Resources section with relevant links to papers, the Marian framework, language codes, tokenizer guides, and quantization documentation.
- Fixed formatting issues in the code blocks for correct rendering.
This update improves the readability, usability, and consistency of the Marian model documentation for users.
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39138/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39137 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39137/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39137/comments | https://api.github.com/repos/huggingface/transformers/issues/39137/events | https://github.com/huggingface/transformers/issues/39137 | 3,189,521,323 | I_kwDOCUB6oc6-HDur | 39,137 | ImportError: cannot import name 'pipeline' from 'transformers' | {
"login": "atabari-bci",
"id": 93156142,
"node_id": "U_kgDOBY1zLg",
"avatar_url": "https://avatars.githubusercontent.com/u/93156142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atabari-bci",
"html_url": "https://github.com/atabari-bci",
"followers_url": "https://api.github.com/users/atabari-bci/followers",
"following_url": "https://api.github.com/users/atabari-bci/following{/other_user}",
"gists_url": "https://api.github.com/users/atabari-bci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atabari-bci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atabari-bci/subscriptions",
"organizations_url": "https://api.github.com/users/atabari-bci/orgs",
"repos_url": "https://api.github.com/users/atabari-bci/repos",
"events_url": "https://api.github.com/users/atabari-bci/events{/privacy}",
"received_events_url": "https://api.github.com/users/atabari-bci/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
},
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-30T18:49:54 | 2025-10-23T00:53:19 | 2025-08-12T11:59:18 | NONE | null | null | null | null | ### System Info
I am using a Databricks notebook.
Databricks runtime: 13.3 LTS (includes Apache Spark 3.4.1, Scala 2.12)
### Who can help?
@Rocketknight1 @SunMarc @zach-huggingface
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Here is the code:
```
%pip install --upgrade torch transformers accelerate deepspeed bitsandbytes huggingface_hub
dbutils.library.restartPython()
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
```
Error:
`ImportError: cannot import name 'pipeline' from 'transformers' (/local_disk0/.ephemeral_nfs/envs/pythonEnv-a13cd5c4-d035-4d04-87bd-75088348617d/lib/python3.10/site-packages/transformers/__init__.py)`
Python: 3.10.12
installed packages:
transformers==4.53.0
huggingface_hub==0.33.1
torch==2.7.1+cu126
accelerate==1.8.1
deepspeed==0.17.1
bitsandbytes==0.46.0
These are all up-to-date versions of these packages. What is the problem?
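One common cause of this error is a local file or folder named `transformers` shadowing the installed package. As a hedged diagnostic sketch (the helper name is illustrative), you can check which path the import actually resolves to:

```python
import importlib.util
import os

def locate_module(name: str):
    """Return the file path that `import name` would resolve to, or None."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

# In the affected environment, replace "json" with "transformers".
path = locate_module("json")
print(path)

# If the resolved path lives under the current working directory instead
# of site-packages, a local file or folder is shadowing the real package.
if path is not None and path.startswith(os.getcwd() + os.sep):
    print("warning: module is shadowed by a local file")
```

If the printed path is not the expected site-packages location, renaming the shadowing file or restarting from a clean environment usually resolves the import error.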
### Expected behavior
Import without error. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39137/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39136 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39136/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39136/comments | https://api.github.com/repos/huggingface/transformers/issues/39136/events | https://github.com/huggingface/transformers/issues/39136 | 3,189,450,071 | I_kwDOCUB6oc6-GyVX | 39,136 | bf16_full_eval=True moves model to device before FSDP application and causes cuda OOM | {
"login": "jlu-figma",
"id": 150076608,
"node_id": "U_kgDOCPH8wA",
"avatar_url": "https://avatars.githubusercontent.com/u/150076608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlu-figma",
"html_url": "https://github.com/jlu-figma",
"followers_url": "https://api.github.com/users/jlu-figma/followers",
"following_url": "https://api.github.com/users/jlu-figma/following{/other_user}",
"gists_url": "https://api.github.com/users/jlu-figma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlu-figma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlu-figma/subscriptions",
"organizations_url": "https://api.github.com/users/jlu-figma/orgs",
"repos_url": "https://api.github.com/users/jlu-figma/repos",
"events_url": "https://api.github.com/users/jlu-figma/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlu-figma/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 4101623725,
"node_id": "LA_kwDOCUB6oc70ec-t",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch%20FSDP",
"name": "PyTorch FSDP",
"color": "B60205",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-06-30T18:23:20 | 2025-09-11T08:30:14 | 2025-09-11T08:30:14 | NONE | null | null | null | null | ### System Info
Hi! Just sharing an interaction I found between accelerate's FSDP v2 code and `bf16_full_eval=True` in the `Trainer`.
1. Setting `bf16_full_eval=True` causes the `Trainer` to move the model to the device over [here](https://github.com/huggingface/transformers/blob/b922b22ec2e458978dbd89038ad4b47885b34195/src/transformers/trainer.py#L2137-L2146).
2. During FSDP preparation, `accelerate` creates empty tensors for all of the optimizer states over [here](https://github.com/huggingface/accelerate/blob/a16d2bb3c1c2ac8029842c8baf2d03388baf09c7/src/accelerate/accelerator.py#L1493-L1500).
3. When the model is very large (e.g., a Qwen 32B model on an A100-80GB instance), this causes a CUDA OOM.
The workaround I found for the time being is to simply set `bf16_full_eval=False`, but it would be great to support `bf16_full_eval` without running out of memory.
### Who can help?
_No response_
### Reproduction
1. Load Qwen 2.5-32B on an 8xA100-80GB instance.
2. Set `bf16_full_eval=True` in your training args (I also set `bf16=False`, because FSDP does an internal upcast to float32 when that is set to `True`).
3. Before creating the `Trainer`, set the environment variable `FSDP_VERSION` to `"2"`. This triggers `accelerate` to use FSDP v2 instead of FSDP v1.
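The steps above can be sketched as a minimal repro config. This is only a sketch: the model name and flags come from the report, while the dataset, output dir, and `fsdp` strategy string are placeholder assumptions, and the actual run requires 8xA100-80GB.

```python
import os


def repro_config():
    """Collect the settings the report says trigger the OOM."""
    # FSDP_VERSION=2 must be set before the Trainer is created so that
    # accelerate picks FSDP v2 instead of v1.
    os.environ["FSDP_VERSION"] = "2"
    return {
        "bf16_full_eval": True,  # moves the model to device before FSDP wrapping
        "bf16": False,           # avoid FSDP's internal float32 upcast
        "fsdp": "full_shard auto_wrap",  # placeholder FSDP strategy
    }


# The actual run would look roughly like (sketch, not executed here):
# from transformers import AutoModelForCausalLM, Trainer, TrainingArguments
# args = TrainingArguments(output_dir="out", **repro_config())
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-32B")
# trainer = Trainer(model=model, args=args, train_dataset=...)
# trainer.train()  # OOMs during FSDP preparation, before the first step
```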
### Expected behavior
Training should at least reach the first training step before running out of memory. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39136/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |