url string | repository_url string | labels_url string | comments_url string | events_url string | html_url string | id int64 | node_id string | number int64 | title string | user dict | labels list | state string | locked bool | assignee dict | assignees list | milestone null | comments list | created_at timestamp[ms] | updated_at timestamp[ms] | closed_at timestamp[ms] | author_association string | type dict | active_lock_reason null | draft bool | pull_request dict | body string | closed_by dict | reactions dict | timeline_url string | performed_via_github_app null | state_reason string | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool | is_closed bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/38732 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38732/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38732/comments | https://api.github.com/repos/huggingface/transformers/issues/38732/events | https://github.com/huggingface/transformers/pull/38732 | 3,133,823,288 | PR_kwDOCUB6oc6Z356e | 38,732 | [llava] fix integration tests with Siglip | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T14:28:26 | 2025-06-11T08:09:16 | 2025-06-11T08:09:16 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38732",
"html_url": "https://github.com/huggingface/transformers/pull/38732",
"diff_url": "https://github.com/huggingface/transformers/pull/38732.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38732.patch",
"merged_at": "2025-06-11T08:09:16"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/pull/38722#issuecomment-2958809929
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38732/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38731 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38731/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38731/comments | https://api.github.com/repos/huggingface/transformers/issues/38731/events | https://github.com/huggingface/transformers/pull/38731 | 3,133,661,717 | PR_kwDOCUB6oc6Z3WTy | 38,731 | Update CvT documentation with improved usage examples and additional … | {
"login": "sezan92",
"id": 11025093,
"node_id": "MDQ6VXNlcjExMDI1MDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/11025093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sezan92",
"html_url": "https://github.com/sezan92",
"followers_url": "https://api.github.com/users/sezan92/followers",
"following_url": "https://api.github.com/users/sezan92/following{/other_user}",
"gists_url": "https://api.github.com/users/sezan92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sezan92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sezan92/subscriptions",
"organizations_url": "https://api.github.com/users/sezan92/orgs",
"repos_url": "https://api.github.com/users/sezan92/repos",
"events_url": "https://api.github.com/users/sezan92/events{/privacy}",
"received_events_url": "https://api.github.com/users/sezan92/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T13:43:25 | 2025-06-18T01:37:33 | 2025-06-17T17:30:03 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38731",
"html_url": "https://github.com/huggingface/transformers/pull/38731",
"diff_url": "https://github.com/huggingface/transformers/pull/38731.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38731.patch",
"merged_at": "2025-06-17T17:30:03"
} | …notes
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38731/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38730 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38730/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38730/comments | https://api.github.com/repos/huggingface/transformers/issues/38730/events | https://github.com/huggingface/transformers/pull/38730 | 3,133,532,724 | PR_kwDOCUB6oc6Z254E | 38,730 | fix: Add method to get image features in PaliGemmaForConditionalGeneration | {
"login": "YushunXiang",
"id": 73413365,
"node_id": "MDQ6VXNlcjczNDEzMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/73413365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YushunXiang",
"html_url": "https://github.com/YushunXiang",
"followers_url": "https://api.github.com/users/YushunXiang/followers",
"following_url": "https://api.github.com/users/YushunXiang/following{/other_user}",
"gists_url": "https://api.github.com/users/YushunXiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YushunXiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YushunXiang/subscriptions",
"organizations_url": "https://api.github.com/users/YushunXiang/orgs",
"repos_url": "https://api.github.com/users/YushunXiang/repos",
"events_url": "https://api.github.com/users/YushunXiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/YushunXiang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T13:06:44 | 2025-06-11T11:35:19 | 2025-06-11T10:26:32 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38730",
"html_url": "https://github.com/huggingface/transformers/pull/38730",
"diff_url": "https://github.com/huggingface/transformers/pull/38730.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38730.patch",
"merged_at": "2025-06-11T10:26:32"
} | # What does this PR do?
In the [v4.52.1 release](https://github.com/huggingface/transformers/releases/tag/v4.52.1) of the transformers library, PR #37033 by @zucchini-nlp introduced a bug by renaming `class PaliGemmaForConditionalGeneration(PaliGemmaPreTrainedModel, GenerationMixin)` to `class PaliGemmaModel(PaliGemmaPreTrainedModel)`, which makes the original `get_image_features` call at line 218 of [huggingface/lerobot/common/policies/pi0/paligemma_with_expert.py](https://github.com/huggingface/lerobot/blob/main/lerobot/common/policies/pi0/paligemma_with_expert.py) unusable.
This pull request adds a new `get_image_features` method across multiple generative model implementations in the `src/transformers/models` directory. The method provides a standardized interface for extracting image features from models, with variations in parameters depending on the specific model's requirements.
I modified 6 files, adding the method `get_image_features` to the corresponding class `<model name>ForConditionalGeneration`:
- src/transformers/models/idefics2/modeling_idefics2.py
- src/transformers/models/llava/modeling_llava.py
- src/transformers/models/qwen2_vl/modeling_qwen2_vl.py
- src/transformers/models/chameleon/modeling_chameleon.py
- src/transformers/models/paligemma/modeling_paligemma.py
- src/transformers/models/video_llava/modeling_video_llava.py
and used `make fix-copies` to generate the other 13 modeling files.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts, @qubvel, @zucchini-nlp
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38730/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38729 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38729/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38729/comments | https://api.github.com/repos/huggingface/transformers/issues/38729/events | https://github.com/huggingface/transformers/pull/38729 | 3,133,415,101 | PR_kwDOCUB6oc6Z2fvT | 38,729 | Expectation fixes and added AMD expectations | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T12:30:57 | 2025-06-13T14:14:59 | 2025-06-13T14:14:58 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38729",
"html_url": "https://github.com/huggingface/transformers/pull/38729",
"diff_url": "https://github.com/huggingface/transformers/pull/38729.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38729.patch",
"merged_at": "2025-06-13T14:14:58"
} | This PR aims to transfer the changes merged in https://github.com/huggingface/transformers/tree/amd-hf-ci-branch
The following PRs were cherry-picked from this branch:
- #38529 , which led to conflicts on `qwen3` and `xglm`
- #38698
- #38697
- #38581
It also fixes some logic when calling `get_device_propreties`, because that function was changed in this PR: now we can use `unpack_device_propreties` to get a consistent triplet (cc. @ivarflakstad for future changes) | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38729/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38728 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38728/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38728/comments | https://api.github.com/repos/huggingface/transformers/issues/38728/events | https://github.com/huggingface/transformers/pull/38728 | 3,133,322,962 | PR_kwDOCUB6oc6Z2LjE | 38,728 | Better typing for num_items_in_batch | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T12:01:27 | 2025-06-11T14:26:43 | 2025-06-11T14:26:41 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38728",
"html_url": "https://github.com/huggingface/transformers/pull/38728",
"diff_url": "https://github.com/huggingface/transformers/pull/38728.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38728.patch",
"merged_at": "2025-06-11T14:26:41"
} | # What does this PR do?
This PR clarifies the type of `num_items_in_batch` + better docstring | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38728/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38727 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38727/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38727/comments | https://api.github.com/repos/huggingface/transformers/issues/38727/events | https://github.com/huggingface/transformers/issues/38727 | 3,133,267,691 | I_kwDOCUB6oc66wd7r | 38,727 | KV Cache Bug in Iterative generation | {
"login": "Greek-Guardian",
"id": 74443539,
"node_id": "MDQ6VXNlcjc0NDQzNTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/74443539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Greek-Guardian",
"html_url": "https://github.com/Greek-Guardian",
"followers_url": "https://api.github.com/users/Greek-Guardian/followers",
"following_url": "https://api.github.com/users/Greek-Guardian/following{/other_user}",
"gists_url": "https://api.github.com/users/Greek-Guardian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Greek-Guardian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Greek-Guardian/subscriptions",
"organizations_url": "https://api.github.com/users/Greek-Guardian/orgs",
"repos_url": "https://api.github.com/users/Greek-Guardian/repos",
"events_url": "https://api.github.com/users/Greek-Guardian/events{/privacy}",
"received_events_url": "https://api.github.com/users/Greek-Guardian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-10T11:44:11 | 2025-06-12T02:04:43 | 2025-06-12T02:04:43 | NONE | null | null | null | null | ### System Info
According to the code [here](https://huggingface.co/docs/transformers/kv_cache#iterative-generation), I can use such code for iterative generation. However, this happened:
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Cell In[4], line 28
     25 # if isinstance(past_key_values, SinkCache):
     26 # inputs = {k: v[:, -max_cache_length:] for k, v in inputs.items()}
     27 input_length = inputs["input_ids"].shape[1]
---> 28 outputs = model.generate(**inputs, do_sample=False, max_new_tokens=256, past_key_values=past_key_values)
     29 completion = tokenizer.decode(outputs[0, input_length: ], skip_special_tokens=True)
     30 messages.append({"role": "assistant", "content": completion})

File /opt/conda/envs/python3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py:116, in context_decorator.<locals>.decorate_context(*args, **kwargs)
    113 @functools.wraps(func)
    114 def decorate_context(*args, **kwargs):
    115     with ctx_factory():
--> 116         return func(*args, **kwargs)

File /opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:2597, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, use_model_defaults, custom_generate, **kwargs)
   2589 input_ids, model_kwargs = self._expand_inputs_for_generation(
   2590     input_ids=input_ids,
   2591     expand_size=generation_config.num_return_sequences,
   2592     is_encoder_decoder=self.config.is_encoder_decoder,
   2593     **model_kwargs,
   2594 )
   2596 # 12. run sample (it degenerates to greedy search when `generation_config.do_sample=False`)
-> 2597 result = self._sample(
   2598     input_ids,
   2599     logits_processor=prepared_logits_processor,
   2600     stopping_criteria=prepared_stopping_criteria,
   2601     generation_config=generation_config,
   2602     synced_gpus=synced_gpus,
   2603     streamer=streamer,
   2604     **model_kwargs,
   2605 )
   2607 elif generation_mode in (GenerationMode.BEAM_SAMPLE, GenerationMode.BEAM_SEARCH):
   2608     # 11. interleave input_ids with `num_beams` additional sequences per batch
   2609     input_ids, model_kwargs = self._expand_inputs_for_generation(
   2610         input_ids=input_ids,
   2611         expand_size=generation_config.num_beams,
   2612         is_encoder_decoder=self.config.is_encoder_decoder,
   2613         **model_kwargs,
   2614     )

File /opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:3550, in GenerationMixin._sample(self, input_ids, logits_processor, stopping_criteria, generation_config, synced_gpus, streamer, **model_kwargs)
   3546 is_prefill = True
[3548](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:3548) while self._has_unfinished_sequences(this_peer_finished, synced_gpus, device=input_ids.device):
[3549](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:3549) # prepare model inputs
-> [3550](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:3550) model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
[3552](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:3552) # prepare variable output controls (note: some models won't accept all output controls)
[3553](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:3553) model_inputs.update({"output_attentions": output_attentions} if output_attentions else {})
File /opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:580, in GenerationMixin.prepare_inputs_for_generation(self, input_ids, past_key_values, attention_mask, inputs_embeds, cache_position, **kwargs)
[578](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:578) if past_key_values is not None:
[579](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:579) model_inputs["past_key_values"] = past_key_values
--> [580](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:580) inputs_embeds, input_ids = self._cache_dependant_input_preparation(
[581](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:581) input_ids, inputs_embeds, cache_position
[582](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:582) )
[584](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:584) # 3. Prepare base model inputs
[585](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:585) input_ids_key = "decoder_input_ids" if self.config.is_encoder_decoder else "input_ids"
File /opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:479, in GenerationMixin._cache_dependant_input_preparation(self, input_ids, inputs_embeds, cache_position)
[475](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:475) if inputs_embeds is not None and input_ids.shape[1] == 0: # Exception 4
[476](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:476) inputs_embeds = inputs_embeds[:, -cache_position.shape[0] :]
[477](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:477) elif (
[478](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:478) inputs_embeds is not None # Exception 1
--> [479](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:479) or (cache_position[-1] >= input_ids.shape[1]) # Exception 3
[480](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:480) ):
[481](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:481) input_ids = input_ids[:, -cache_position.shape[0] :]
[482](https://vscode-remote+aop-002dlab-002ealibaba-002dinc-002ecom.vscode-resource.vscode-cdn.net/opt/conda/envs/python3.10/lib/python3.10/site-packages/transformers/generation/utils.py:482) elif input_ids.shape[1] != cache_position.shape[0]: # Default case (the "else", a no op, is Exception 2)
IndexError: index -1 is out of bounds for dimension 0 with size 0`
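The failing branch above can be illustrated with a small standalone sketch. This is a hypothetical, simplified re-creation of the check in `_cache_dependant_input_preparation` (plain Python lists stand in for tensors; `cache_dependant_slice` is an illustrative helper, not a transformers API): when `cache_position` arrives empty, taking its last element is exactly the reported `index -1 is out of bounds` failure.

```python
# Hypothetical simplification of the failing check; names mirror the
# traceback, but this is NOT the library code.
def cache_dependant_slice(input_ids, inputs_embeds, cache_position):
    seq_len = len(input_ids[0])
    if inputs_embeds is not None and seq_len == 0:  # Exception 4
        return input_ids
    # Exception 1 / Exception 3: this is the line that breaks when
    # cache_position is empty, because cache_position[-1] indexes into
    # a zero-length sequence.
    if inputs_embeds is not None or cache_position[-1] >= seq_len:
        return [row[-len(cache_position):] for row in input_ids]
    return input_ids

# A healthy decode step: one cached position past the prompt.
print(cache_dependant_slice([[1, 2, 3]], None, [3]))  # [[3]]

# The reported failure mode: an empty cache_position.
try:
    cache_dependant_slice([[1, 2, 3]], None, [])
except IndexError as exc:
    print("IndexError:", exc)
```

In other words, the bug report amounts to `generate` reaching this slicing logic with a zero-length `cache_position`, which the guard conditions do not anticipate.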
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
transformers==4.52.4
the model is Qwen3
### Expected behavior
I expect it to use the KV cache as normal. | {
"login": "Greek-Guardian",
"id": 74443539,
"node_id": "MDQ6VXNlcjc0NDQzNTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/74443539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Greek-Guardian",
"html_url": "https://github.com/Greek-Guardian",
"followers_url": "https://api.github.com/users/Greek-Guardian/followers",
"following_url": "https://api.github.com/users/Greek-Guardian/following{/other_user}",
"gists_url": "https://api.github.com/users/Greek-Guardian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Greek-Guardian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Greek-Guardian/subscriptions",
"organizations_url": "https://api.github.com/users/Greek-Guardian/orgs",
"repos_url": "https://api.github.com/users/Greek-Guardian/repos",
"events_url": "https://api.github.com/users/Greek-Guardian/events{/privacy}",
"received_events_url": "https://api.github.com/users/Greek-Guardian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38727/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38726 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38726/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38726/comments | https://api.github.com/repos/huggingface/transformers/issues/38726/events | https://github.com/huggingface/transformers/issues/38726 | 3,133,171,216 | I_kwDOCUB6oc66wGYQ | 38,726 | Issue importing models in jupyter notebooks 'No module named transformers.models.ipynb_checkpoints' | {
"login": "mchaudrycupa",
"id": 213815457,
"node_id": "U_kgDODL6QoQ",
"avatar_url": "https://avatars.githubusercontent.com/u/213815457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchaudrycupa",
"html_url": "https://github.com/mchaudrycupa",
"followers_url": "https://api.github.com/users/mchaudrycupa/followers",
"following_url": "https://api.github.com/users/mchaudrycupa/following{/other_user}",
"gists_url": "https://api.github.com/users/mchaudrycupa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchaudrycupa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchaudrycupa/subscriptions",
"organizations_url": "https://api.github.com/users/mchaudrycupa/orgs",
"repos_url": "https://api.github.com/users/mchaudrycupa/repos",
"events_url": "https://api.github.com/users/mchaudrycupa/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchaudrycupa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-10T11:10:41 | 2025-07-19T08:02:23 | 2025-07-19T08:02:23 | NONE | null | null | null | null | ### System Info
The following error comes up: ModuleNotFoundError: No module named 'transformers.models.ipynb_checkpoints'
Packages:
ipykernel==6.29.5
- `transformers` version: 4.52.4
- Platform: Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.39
- Python version: 3.10.16
- Huggingface_hub version: 0.31.4
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: Tesla T4
I'm running the code in jupyter labs in a jupyter notebook.
The code I'm running is as follows:
from transformers import AutoTokenizer, AutoModel, BitsAndBytesConfig
tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-4B', padding_side='left')
model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-4B')
Please let me know if there's any other information that'd be useful.
The whole error that comes up is:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2045, in _LazyModule.__getattr__(self, name)
   2044 try:
-> 2045     module = self._get_module(self._class_to_module[name])
   2046     value = getattr(module, name)
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2075, in _LazyModule._get_module(self, module_name)
   2074 except Exception as e:
-> 2075     raise e
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2073, in _LazyModule._get_module(self, module_name)
   2072 try:
-> 2073     return importlib.import_module("." + module_name, self.__name__)
   2074 except Exception as e:
File /usr/local/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:992, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1004, in _find_and_load_unlocked(name, import_)
ModuleNotFoundError: No module named 'transformers.models.ipynb_checkpoints'
The above exception was the direct cause of the following exception:
ModuleNotFoundError Traceback (most recent call last)
Cell In[2], line 2
1 tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-4B', padding_side='left')
----> 2 model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-4B')
3 # ,
4 # quantization_config = bnb_config,
5 # device_map = "auto")
6
7 # We recommend enabling flash_attention_2 for better acceleration and memory saving.
8 # model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-8B', attn_implementation="flash_attention_2", torch_dtype=torch.float16).cuda()
File ~/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:568, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
564 return model_class.from_pretrained(
565 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
566 )
567 elif type(config) in cls._model_mapping.keys():
--> 568 model_class = _get_model_class(config, cls._model_mapping)
569 if model_class.config_class == config.sub_configs.get("text_config", None):
570 config = config.get_text_config()
File ~/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:388, in _get_model_class(config, model_mapping)
387 def _get_model_class(config, model_mapping):
--> 388 supported_models = model_mapping[type(config)]
389 if not isinstance(supported_models, (list, tuple)):
390 return supported_models
File ~/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:774, in _LazyAutoMapping.__getitem__(self, key)
772 if model_type in self._model_mapping:
773 model_name = self._model_mapping[model_type]
--> 774 return self._load_attr_from_module(model_type, model_name)
776 # Maybe there was several model types associated with this config.
777 model_types = [k for k, v in self._config_mapping.items() if v == key.__name__]
File ~/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:788, in _LazyAutoMapping._load_attr_from_module(self, model_type, attr)
786 if module_name not in self._modules:
787 self._modules[module_name] = importlib.import_module(f".{module_name}", "transformers.models")
--> 788 return getattribute_from_module(self._modules[module_name], attr)
File ~/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:700, in getattribute_from_module(module, attr)
698 if isinstance(attr, tuple):
699 return tuple(getattribute_from_module(module, a) for a in attr)
--> 700 if hasattr(module, attr):
701 return getattr(module, attr)
702 # Some of the mappings have entries model_type -> object of another model type. In that case we try to grab the
703 # object at the top level.
File ~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2048, in _LazyModule.__getattr__(self, name)
2046 value = getattr(module, name)
2047 except (ModuleNotFoundError, RuntimeError) as e:
-> 2048 raise ModuleNotFoundError(
2049 f"Could not import module '{name}'. Are this object's requirements defined correctly?"
2050 ) from e
2052 elif name in self._modules:
2053 try:
ModuleNotFoundError: Could not import module 'Qwen3Model'. Are this object's requirements defined correctly?
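One common cause of this error pattern (an assumption worth verifying, not confirmed in this report) is a stray `.ipynb_checkpoints` directory that Jupyter creates inside the installed `transformers/models` folder; the lazy auto-mapping then tries to import it as a model module. A hedged cleanup sketch follows — `remove_checkpoint_dirs` is a hypothetical helper, not a transformers API, and the package path is discovered at runtime:

```python
import importlib.util
import pathlib
import shutil

def remove_checkpoint_dirs(package_root: pathlib.Path) -> list[pathlib.Path]:
    """Delete any stray .ipynb_checkpoints directories under package_root
    and return the paths that were removed."""
    removed = []
    for path in sorted(package_root.rglob(".ipynb_checkpoints")):
        if path.is_dir():
            shutil.rmtree(path)
            removed.append(path)
    return removed

if __name__ == "__main__":
    # Locate the installed transformers package, if present, and clean
    # its models directory. Harmless if nothing matches.
    spec = importlib.util.find_spec("transformers")
    if spec is not None and spec.origin is not None:
        models_dir = pathlib.Path(spec.origin).parent / "models"
        print(remove_checkpoint_dirs(models_dir))
```

If this is indeed the cause, restarting the kernel after the cleanup should let `AutoModel.from_pretrained` resolve `Qwen3Model` normally.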
### Who can help?
@ArthurZucker This may be relevant to you, apologies if not.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoTokenizer, AutoModel, BitsAndBytesConfig
tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-4B', padding_side='left')
model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-4B')
### Expected behavior
That the model is imported correctly | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38726/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38725 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38725/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38725/comments | https://api.github.com/repos/huggingface/transformers/issues/38725/events | https://github.com/huggingface/transformers/issues/38725 | 3,133,149,430 | I_kwDOCUB6oc66wBD2 | 38,725 | `MoshiIntegrationTests` started to fail after #34464 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
}
] | open | false | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | [] | 2025-06-10T11:03:11 | 2025-10-16T14:20:37 | null | COLLABORATOR | null | null | null | null | `MoshiIntegrationTests` started to fail after #34464.
### Reproduction
For example,
> RUN_SLOW=1 python3 -m pytest -v tests/models/moshi/test_modeling_moshi.py::MoshiIntegrationTests::test_moshika_greedy_unconditional_fp16
Since then, over the past 8 months, there have been several periods where those tests failed with different errors (not even reaching the forward/generate call). But whenever they could run, the output values were consistent with those produced after #34464; the outputs only differ between before and after #34464.
@gante Could you check if the changed outputs are expected?
Your commit : 8a734ea2
Parent commit: 913330ca
(You will need to run `pip install -e .` when checking out to old commits like these)
| {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38725/timeline | null | reopened | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/38724 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38724/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38724/comments | https://api.github.com/repos/huggingface/transformers/issues/38724/events | https://github.com/huggingface/transformers/pull/38724 | 3,133,048,860 | PR_kwDOCUB6oc6Z1ODC | 38,724 | Implement DPTImageProcessor Fast Class | {
"login": "Shuvam-M-Astro",
"id": 96789016,
"node_id": "U_kgDOBcTiGA",
"avatar_url": "https://avatars.githubusercontent.com/u/96789016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shuvam-M-Astro",
"html_url": "https://github.com/Shuvam-M-Astro",
"followers_url": "https://api.github.com/users/Shuvam-M-Astro/followers",
"following_url": "https://api.github.com/users/Shuvam-M-Astro/following{/other_user}",
"gists_url": "https://api.github.com/users/Shuvam-M-Astro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shuvam-M-Astro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shuvam-M-Astro/subscriptions",
"organizations_url": "https://api.github.com/users/Shuvam-M-Astro/orgs",
"repos_url": "https://api.github.com/users/Shuvam-M-Astro/repos",
"events_url": "https://api.github.com/users/Shuvam-M-Astro/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shuvam-M-Astro/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T10:35:16 | 2025-06-10T10:35:49 | 2025-06-10T10:35:49 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38724",
"html_url": "https://github.com/huggingface/transformers/pull/38724",
"diff_url": "https://github.com/huggingface/transformers/pull/38724.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38724.patch",
"merged_at": null
} | Added DPTImageProcessorFast for efficient batched preprocessing using torch/torchvision
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Shuvam-M-Astro",
"id": 96789016,
"node_id": "U_kgDOBcTiGA",
"avatar_url": "https://avatars.githubusercontent.com/u/96789016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shuvam-M-Astro",
"html_url": "https://github.com/Shuvam-M-Astro",
"followers_url": "https://api.github.com/users/Shuvam-M-Astro/followers",
"following_url": "https://api.github.com/users/Shuvam-M-Astro/following{/other_user}",
"gists_url": "https://api.github.com/users/Shuvam-M-Astro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shuvam-M-Astro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shuvam-M-Astro/subscriptions",
"organizations_url": "https://api.github.com/users/Shuvam-M-Astro/orgs",
"repos_url": "https://api.github.com/users/Shuvam-M-Astro/repos",
"events_url": "https://api.github.com/users/Shuvam-M-Astro/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shuvam-M-Astro/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38724/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38723 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38723/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38723/comments | https://api.github.com/repos/huggingface/transformers/issues/38723/events | https://github.com/huggingface/transformers/pull/38723 | 3,133,003,209 | PR_kwDOCUB6oc6Z1D7F | 38,723 | Update get device properties and types | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T10:20:34 | 2025-06-10T14:02:28 | 2025-06-10T14:02:26 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38723",
"html_url": "https://github.com/huggingface/transformers/pull/38723",
"diff_url": "https://github.com/huggingface/transformers/pull/38723.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38723.patch",
"merged_at": "2025-06-10T14:02:26"
} | By using the more consistent `tuple[Optional[str], Optional[int], Optional[int]]` format @remi-or introduced as the functional type of the class, we can simplify a lot of code.
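For illustration, the relationship between the two formats could be sketched roughly as follows (the type aliases and the `normalize` helper are assumptions made for this example, not code from the PR):

```python
from typing import Optional, Union

# Hypothetical aliases for the two formats described in this PR.
# Functional (normalized) form: (device_type, major, minor).
DeviceProperties = tuple[Optional[str], Optional[int], Optional[int]]
# Compact internal form: the version part may be absent, a single int,
# or a (major, minor) pair.
PackedProperties = tuple[Optional[str], Union[None, int, tuple[int, int]]]


def normalize(key: PackedProperties) -> DeviceProperties:
    """Expand the compact internal form into the consistent 3-tuple form."""
    device, version = key
    if version is None:
        return (device, None, None)
    if isinstance(version, tuple):
        return (device, version[0], version[1])
    return (device, version, None)
```

Keeping the compact form as the authoring format while normalizing to the 3-tuple internally is what lets downstream comparison code handle a single shape.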
We keep the old `tuple[Optional[str], Union[None, int, tuple[int, int]]]` as the internal type of `Expectations` because it helps us create instances of the class elegantly. | {
"login": "mht-sharma",
"id": 21088122,
"node_id": "MDQ6VXNlcjIxMDg4MTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mht-sharma",
"html_url": "https://github.com/mht-sharma",
"followers_url": "https://api.github.com/users/mht-sharma/followers",
"following_url": "https://api.github.com/users/mht-sharma/following{/other_user}",
"gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions",
"organizations_url": "https://api.github.com/users/mht-sharma/orgs",
"repos_url": "https://api.github.com/users/mht-sharma/repos",
"events_url": "https://api.github.com/users/mht-sharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/mht-sharma/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38723/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38722 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38722/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38722/comments | https://api.github.com/repos/huggingface/transformers/issues/38722/events | https://github.com/huggingface/transformers/pull/38722 | 3,132,950,576 | PR_kwDOCUB6oc6Z03_3 | 38,722 | Fix `llava` tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T10:04:59 | 2025-06-10T11:53:18 | 2025-06-10T11:53:17 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38722",
"html_url": "https://github.com/huggingface/transformers/pull/38722",
"diff_url": "https://github.com/huggingface/transformers/pull/38722.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38722.patch",
"merged_at": "2025-06-10T11:53:17"
} | # What does this PR do?
They have been failing for a long time; some are output-matching issues and some are OOM issues.
Tests pass on A10 / T4 (torch 2.7.1) except `test_generation_siglip_backbone`
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38722/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38721/comments | https://api.github.com/repos/huggingface/transformers/issues/38721/events | https://github.com/huggingface/transformers/pull/38721 | 3,132,911,625 | PR_kwDOCUB6oc6Z0vKv | 38,721 | [Docs] New DiT model card | {
"login": "Vixel2006",
"id": 166058059,
"node_id": "U_kgDOCeXYSw",
"avatar_url": "https://avatars.githubusercontent.com/u/166058059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vixel2006",
"html_url": "https://github.com/Vixel2006",
"followers_url": "https://api.github.com/users/Vixel2006/followers",
"following_url": "https://api.github.com/users/Vixel2006/following{/other_user}",
"gists_url": "https://api.github.com/users/Vixel2006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vixel2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vixel2006/subscriptions",
"organizations_url": "https://api.github.com/users/Vixel2006/orgs",
"repos_url": "https://api.github.com/users/Vixel2006/repos",
"events_url": "https://api.github.com/users/Vixel2006/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vixel2006/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T09:54:19 | 2025-06-13T08:30:40 | 2025-06-12T17:26:50 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38721",
"html_url": "https://github.com/huggingface/transformers/pull/38721",
"diff_url": "https://github.com/huggingface/transformers/pull/38721.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38721.patch",
"merged_at": "2025-06-12T17:26:50"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR updates the model-card for the DiT model, as described in https://github.com/huggingface/transformers/issues/36979, in an attempt to standardize all model-cards.
## Who can review?
[@stevhliu ](https://github.com/stevhliu)
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38721/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38720/comments | https://api.github.com/repos/huggingface/transformers/issues/38720/events | https://github.com/huggingface/transformers/issues/38720 | 3,132,901,789 | I_kwDOCUB6oc66vEmd | 38,720 | ModernBERT for Sequence Classification - issues with finetuning | {
"login": "98MM",
"id": 47939788,
"node_id": "MDQ6VXNlcjQ3OTM5Nzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/47939788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/98MM",
"html_url": "https://github.com/98MM",
"followers_url": "https://api.github.com/users/98MM/followers",
"following_url": "https://api.github.com/users/98MM/following{/other_user}",
"gists_url": "https://api.github.com/users/98MM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/98MM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/98MM/subscriptions",
"organizations_url": "https://api.github.com/users/98MM/orgs",
"repos_url": "https://api.github.com/users/98MM/repos",
"events_url": "https://api.github.com/users/98MM/events{/privacy}",
"received_events_url": "https://api.github.com/users/98MM/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-10T09:51:18 | 2025-07-16T13:53:00 | 2025-07-16T13:53:00 | NONE | null | null | null | null | ### System Info
transformers version 4.53.0.dev0
flash_attn version 2.7.4.post1 (via pip)
Using offical transformers image from: https://hub.docker.com/r/huggingface/transformers-pytorch-gpu
### Who can help?
@ArthurZucker
@zach-huggingface
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Standard BERT-like finetuning approach for NLI using AutoModelForSequenceCassification:
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)
nli = load_dataset('kiddothe2b/contract-nli', 'contractnli_a') # also tested with subset of sentence-transformer/all-nli
def tokenize_function(examples, tokenizer, task_inputs):
    inps = [examples[inp] for inp in task_inputs]
    tokenized = tokenizer(*inps, truncation=True)
    return tokenized
tokenizer = AutoTokenizer.from_pretrained('answerdotai/ModernBERT-base') # moved up: must be defined before the .map call below
# tokenize the datasets ...
train_dataset = nli['train'].map(
    tokenize_function,
    batched=True,
    remove_columns=['premise', 'hypothesis'],
    fn_kwargs={"tokenizer": tokenizer, "task_inputs": ['premise', 'hypothesis']})
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, padding='longest')
# compute metrics fn ...
model = AutoModelForSequenceClassification.from_pretrained(
'answerdotai/ModernBERT-base',
num_labels=3,
classifier_pooling='cls',
torch_dtype='float16',
id2label=id2label,
label2id=label2id,
reference_compile=False # also tested without this setting
).to('cuda') # tested this on L4, A100, A100_80GB, H100
train_bsz, val_bsz = 16, 16
lr = 2e-5
betas = (0.9, 0.98)
n_epochs = 10
eps = 1e-6
wd = 8e-6
training_args = TrainingArguments(
output_dir='/out',
learning_rate=lr,
weight_decay=wd,
per_device_train_batch_size=train_bsz,
per_device_eval_batch_size=val_bsz,
num_train_epochs=n_epochs,
lr_scheduler_type="linear",
optim="adamw_torch",
adam_beta1=betas[0],
adam_beta2=betas[1],
adam_epsilon=eps,
logging_strategy="epoch",
eval_strategy="epoch",
save_strategy="epoch",
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=test_dataset,
processing_class=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
)
trainer.train()
```
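For reference, a hypothetical `compute_metrics` along these lines (not necessarily the one used in the report above; names are illustrative) is what makes the per-class instability measurable, since it reports F1 for each label separately rather than only the macro average:

```python
import numpy as np


def compute_metrics(eval_pred):
    """Accuracy, per-class F1, and macro F1 in pure numpy."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    metrics = {"accuracy": float((preds == labels).mean())}
    f1s = []
    for c in np.unique(labels):
        tp = np.sum((preds == c) & (labels == c))
        fp = np.sum((preds == c) & (labels != c))
        fn = np.sum((preds != c) & (labels == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        f1s.append(f1)
        metrics[f"f1_class_{int(c)}"] = float(f1)
    metrics["macro_f1"] = float(np.mean(f1s))
    return metrics
```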
### Expected behavior
I want to preface this that I first encountered this problem on my own pretrained ModernBERT implementation. After extensive testing on my version I decided to test the official version (answerdotai/ModernBERT-base), where I encountered the same issue.
My base model works well on fill-mask tasks (via pipeline), outputting expected, correct words. After pretraining it achieved an accuracy of 0.86 and perplexity of 2.1 on an eval dataset. These results are why I decided to test the official implementation as well.
I encountered this issue when attempting to use AutoModelForSequenceClassification / ModernBertForSequenceClassification on both my own and the official version.
Issue behaviour:
Training begins and runs. However, the prediction accuracy and F1 scores indicate random guessing / very unstable training (with three classes, macro accuracy and F1 hover around 0.33, while per-class values vary wildly from 0.00 up to 0.91 per epoch, seemingly at random).
Looking at the embeddings themselves, they seem fine - no issues with NaN values (as raised [here](https://github.com/huggingface/transformers/issues/35574)). I've also tested masked word prediction (via fill-mask pipeline), which also output correct/expected predictions.
Expected behaviour would be at least stable training, if not results similar to those claimed in the original ModernBERT paper. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38720/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38719/comments | https://api.github.com/repos/huggingface/transformers/issues/38719/events | https://github.com/huggingface/transformers/pull/38719 | 3,132,625,488 | PR_kwDOCUB6oc6ZzwOf | 38,719 | Add Fireflies model and tests | {
"login": "Arynz-C",
"id": 68093214,
"node_id": "MDQ6VXNlcjY4MDkzMjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/68093214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arynz-C",
"html_url": "https://github.com/Arynz-C",
"followers_url": "https://api.github.com/users/Arynz-C/followers",
"following_url": "https://api.github.com/users/Arynz-C/following{/other_user}",
"gists_url": "https://api.github.com/users/Arynz-C/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arynz-C/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arynz-C/subscriptions",
"organizations_url": "https://api.github.com/users/Arynz-C/orgs",
"repos_url": "https://api.github.com/users/Arynz-C/repos",
"events_url": "https://api.github.com/users/Arynz-C/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arynz-C/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T08:22:38 | 2025-06-10T15:29:39 | 2025-06-10T14:59:48 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38719",
"html_url": "https://github.com/huggingface/transformers/pull/38719",
"diff_url": "https://github.com/huggingface/transformers/pull/38719.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38719.patch",
"merged_at": null
} | # What does this PR do?
This PR adds a new encoder-only Transformer model called Fireflies, designed as a lightweight, modular architecture for NLP research. The model was built from scratch using standard PyTorch modules and Hugging Face’s `PreTrainedModel` interface.
Key features:
- Implements a standard `nn.TransformerEncoder` as the backbone.
- Supports `AutoConfig` and `AutoModel` integration.
- Includes a minimal test for the forward pass using `FirefliesModel`.
- Registered in `configuration_auto.py` and `modeling_auto.py`.
- Compatible with `.from_pretrained()` loading from the Hugging Face Hub.
🔧 Motivation:
This model was created as a foundation for exploring the training of lightweight Transformer-based architectures, specifically for Indonesian and English text tasks. Its simple encoder-only design makes it easy to experiment with training, fine-tuning, and conversion to other formats such as ONNX or GGUF.
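The backbone described above could be sketched roughly as follows (sizes, field names, and the class name `FirefliesBackbone` are illustrative assumptions, not the PR's actual code; the PR additionally wraps this in the `PreTrainedModel` interface):

```python
import torch
import torch.nn as nn


class FirefliesBackbone(nn.Module):
    """Minimal encoder-only sketch: embedding + standard TransformerEncoder."""

    def __init__(self, vocab_size=30522, hidden_size=256, num_layers=4, num_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # (batch, seq) token ids -> (batch, seq, hidden) contextual states
        return self.encoder(self.embed(input_ids))
```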
Fixes: None (new model integration)
## Before submitting
- I read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request)
- This PR implements a new model (Fireflies) in `transformers.models.fireflies`
- Added test file: `tests/models/fireflies/test_modeling_fireflies.py`
- Registered in `configuration_auto.py` and `modeling_auto.py`
- Model passes `from_pretrained()` loading with config & weights
## Who can review?
- @ArthurZucker (text models)
- @Rocketknight1 (Flax/PyTorch base)
- @SunMarc or @zach-huggingface (Trainer/test infrastructure)
Let me know if I should add documentation pages (`model_doc/fireflies`) or integrate with `AutoModelForCausalLM`, `MaskedLM`, etc. in the future.
Thanks in advance for your review! | {
"login": "Arynz-C",
"id": 68093214,
"node_id": "MDQ6VXNlcjY4MDkzMjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/68093214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arynz-C",
"html_url": "https://github.com/Arynz-C",
"followers_url": "https://api.github.com/users/Arynz-C/followers",
"following_url": "https://api.github.com/users/Arynz-C/following{/other_user}",
"gists_url": "https://api.github.com/users/Arynz-C/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arynz-C/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arynz-C/subscriptions",
"organizations_url": "https://api.github.com/users/Arynz-C/orgs",
"repos_url": "https://api.github.com/users/Arynz-C/repos",
"events_url": "https://api.github.com/users/Arynz-C/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arynz-C/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38719/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 2,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38719/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38718/comments | https://api.github.com/repos/huggingface/transformers/issues/38718/events | https://github.com/huggingface/transformers/pull/38718 | 3,132,422,451 | PR_kwDOCUB6oc6ZzD4Y | 38,718 | Fix: undefined reference to gelu in `ClippedGELUActivation` | {
"login": "nil0x9",
"id": 185366217,
"node_id": "U_kgDOCwx2yQ",
"avatar_url": "https://avatars.githubusercontent.com/u/185366217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nil0x9",
"html_url": "https://github.com/nil0x9",
"followers_url": "https://api.github.com/users/nil0x9/followers",
"following_url": "https://api.github.com/users/nil0x9/following{/other_user}",
"gists_url": "https://api.github.com/users/nil0x9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nil0x9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nil0x9/subscriptions",
"organizations_url": "https://api.github.com/users/nil0x9/orgs",
"repos_url": "https://api.github.com/users/nil0x9/repos",
"events_url": "https://api.github.com/users/nil0x9/events{/privacy}",
"received_events_url": "https://api.github.com/users/nil0x9/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T07:11:09 | 2025-06-10T12:49:14 | 2025-06-10T12:49:14 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38718",
"html_url": "https://github.com/huggingface/transformers/pull/38718",
"diff_url": "https://github.com/huggingface/transformers/pull/38718.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38718.patch",
"merged_at": null
} | # Summary of This PR
- [x] Fixes a typo in `activations.py` that referenced an undefined symbol `gelu`.
"login": "nil0x9",
"id": 185366217,
"node_id": "U_kgDOCwx2yQ",
"avatar_url": "https://avatars.githubusercontent.com/u/185366217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nil0x9",
"html_url": "https://github.com/nil0x9",
"followers_url": "https://api.github.com/users/nil0x9/followers",
"following_url": "https://api.github.com/users/nil0x9/following{/other_user}",
"gists_url": "https://api.github.com/users/nil0x9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nil0x9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nil0x9/subscriptions",
"organizations_url": "https://api.github.com/users/nil0x9/orgs",
"repos_url": "https://api.github.com/users/nil0x9/repos",
"events_url": "https://api.github.com/users/nil0x9/events{/privacy}",
"received_events_url": "https://api.github.com/users/nil0x9/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38718/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38717/comments | https://api.github.com/repos/huggingface/transformers/issues/38717/events | https://github.com/huggingface/transformers/issues/38717 | 3,132,402,518 | I_kwDOCUB6oc66tKtW | 38,717 | `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`. | {
"login": "falali009",
"id": 15902245,
"node_id": "MDQ6VXNlcjE1OTAyMjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/15902245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/falali009",
"html_url": "https://github.com/falali009",
"followers_url": "https://api.github.com/users/falali009/followers",
"following_url": "https://api.github.com/users/falali009/following{/other_user}",
"gists_url": "https://api.github.com/users/falali009/gists{/gist_id}",
"starred_url": "https://api.github.com/users/falali009/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/falali009/subscriptions",
"organizations_url": "https://api.github.com/users/falali009/orgs",
"repos_url": "https://api.github.com/users/falali009/repos",
"events_url": "https://api.github.com/users/falali009/events{/privacy}",
"received_events_url": "https://api.github.com/users/falali009/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-10T07:02:48 | 2025-07-18T08:02:48 | 2025-07-18T08:02:48 | NONE | null | null | null | null | ### System Info
The reported error is:
Joy_caption_two_advanced
`.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
# ComfyUI Error Report
## Error Details
- **Node ID:** 64
- **Node Type:** Joy_caption_two_advanced
- **Exception Type:** ValueError
- **Exception Message:** `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
## Stack Trace
```
File "/home/falali/flux/ComfyUI/ComfyUI/execution.py", line 349, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/home/falali/flux/ComfyUI/ComfyUI/execution.py", line 224, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/home/falali/flux/ComfyUI/ComfyUI/execution.py", line 196, in _map_node_over_list
process_inputs(input_dict, i)
File "/home/falali/flux/ComfyUI/ComfyUI/execution.py", line 185, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_slk_joy_caption_two/joy_caption_two_node.py", line 546, in generate
text_model = joy_two_pipeline.llm.load_llm_model()
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_slk_joy_caption_two/joy_caption_two_node.py", line 177, in load_llm_model
text_model = AutoModelForCausalLM.from_pretrained(text_model_path,
File "/home/falali/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
return model_class.from_pretrained(
File "/home/falali/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4015, in from_pretrained
dispatch_model(model, **device_map_kwargs)
File "/home/falali/.local/lib/python3.10/site-packages/accelerate/big_modeling.py", line 501, in dispatch_model
model.to(device)
File "/home/falali/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2861, in to
raise ValueError(
```
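For context, below is a minimal sketch of the loading pattern the error message recommends for an already-quantized bitsandbytes checkpoint. This is an illustrative assumption, not the Joy Caption node's actual code (`load_quantized_llm` is a hypothetical helper, and in this report the `.to()` call actually happens inside `from_pretrained`/`dispatch_model`, which hints at a transformers/accelerate version mismatch rather than user code):

```python
import torch
from transformers import AutoModelForCausalLM

def load_quantized_llm(path: str):
    """Load a pre-quantized bnb-4bit checkpoint without calling .to()."""
    # device_map="auto" lets accelerate place the quantized weights at load
    # time; the returned model must then be used as-is, since moving it with
    # .to(device) raises the ValueError reported above.
    return AutoModelForCausalLM.from_pretrained(
        path,
        device_map="auto",
        torch_dtype=torch.bfloat16,
    )

if __name__ == "__main__":
    model = load_quantized_llm("unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit")
    # model.to("cuda")  # <- would raise: `.to` is not supported for 4-bit models
```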
## System Information
- **ComfyUI Version:** 0.3.40
- **Arguments:** main.py --listen 10.201.10.37
- **OS:** posix
- **Python Version:** 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0]
- **Embedded Python:** false
- **PyTorch Version:** 2.6.0+cu124
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 25386352640
- **VRAM Free:** 23902150840
- **Torch VRAM Total:** 1006632960
- **Torch VRAM Free:** 73588920
## Logs
```
2025-06-10T14:58:02.194566 -
2025-06-10T14:58:02.239783 - [ComfyUI-Easy-Use] server: v1.2.8 Loaded
2025-06-10T14:58:02.239820 - [ComfyUI-Easy-Use] web root: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-easy-use/web_version/v2 Loaded
2025-06-10T14:58:02.248932 - Traceback (most recent call last):
File "/home/falali/flux/ComfyUI/ComfyUI/nodes.py", line 2124, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-tensorops/__init__.py", line 1, in <module>
from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-tensorops/nodes/__init__.py", line 6, in <module>
from .save_to_s3 import SaveImageToS3
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-tensorops/nodes/save_to_s3.py", line 2, in <module>
import boto3
ModuleNotFoundError: No module named 'boto3'
2025-06-10T14:58:02.249078 - Cannot import /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-tensorops module for custom nodes: No module named 'boto3'
2025-06-10T14:58:02.264805 - [/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfy-mtb] | INFO -> loaded 94 nodes successfully
2025-06-10T14:58:02.264980 - [/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfy-mtb] | INFO -> Some nodes (2) could not be loaded. This can be ignored, but go to http://10.201.10.37:8188/mtb if you want more information.
2025-06-10T14:58:02.266087 - ### Loading: ComfyUI-Impact-Pack (V8.8.1)
2025-06-10T14:58:02.276373 - [Impact Pack] Wildcards loading done.
2025-06-10T14:58:02.278422 - Nvidia APEX normalization not installed, using PyTorch LayerNorm
2025-06-10T14:58:02.324117 - Traceback (most recent call last):
File "/home/falali/flux/ComfyUI/ComfyUI/nodes.py", line 2124, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-crewai/__init__.py", line 1, in <module>
from .nodes.llm_ollama_node import LlmOllama
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-crewai/nodes/llm_ollama_node.py", line 2, in <module>
from crewai import LLM
ModuleNotFoundError: No module named 'crewai'
2025-06-10T14:58:02.324206 - Cannot import /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-crewai module for custom nodes: No module named 'crewai'
2025-06-10T14:58:02.346286 - ComfyUI found: /home/falali/flux/ComfyUI/ComfyUI
2025-06-10T14:58:02.346318 - '/home/falali/flux/ComfyUI/ComfyUI' added to sys.path
2025-06-10T14:58:02.442416 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-06-10T14:58:02.528532 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-06-10T14:58:02.532860 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-06-10T14:58:02.734109 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-06-10T14:58:02.805116 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-06-10T14:58:03.165716 - Error importing whisper: PortAudio library not found
2025-06-10T14:58:03.260693 - Error importing movie_editor: No module named 'moviepy.editor'
2025-06-10T14:58:03.316714 - Error importing dall_e: PortAudio library not found
2025-06-10T14:58:04.998892 - llama-cpp installed
2025-06-10T14:58:06.475125 - Successfully installed py-cord[voice]
2025-06-10T14:58:06.475637 - ### Loading: ComfyUI-Impact-Subpack (V1.2.9)
2025-06-10T14:58:06.499286 - [Impact Subpack] ultralytics_bbox: /home/falali/flux/ComfyUI/ComfyUI/models/ultralytics/bbox
2025-06-10T14:58:06.499332 - [Impact Subpack] ultralytics_segm: /home/falali/flux/ComfyUI/ComfyUI/models/ultralytics/segm
2025-06-10T14:58:06.499576 - ASTERR config loaded successfully
2025-06-10T14:58:06.504610 - [/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_controlnet_aux] | INFO -> Using ckpts path: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts
2025-06-10T14:58:06.504719 - [/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_controlnet_aux] | INFO -> Using symlinks: False
2025-06-10T14:58:06.504789 - [/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
2025-06-10T14:58:06.512948 - Nvidia APEX normalization not installed, using PyTorch LayerNorm
2025-06-10T14:58:06.556636 - Web extensions folder found at /home/falali/flux/ComfyUI/ComfyUI/web/extensions/ComfyLiterals
2025-06-10T14:58:06.611706 - Pony Character Prompt Picker: Loaded
2025-06-10T14:58:06.634564 - Efficiency Nodes: Attempting to add Control Net options to the 'HiRes-Fix Script' Node (comfyui_controlnet_aux add-on)...Success!
2025-06-10T14:58:06.684678 - WAS Node Suite: BlenderNeko's Advanced CLIP Text Encode found, attempting to enable `CLIPTextEncode` support.
2025-06-10T14:58:06.684731 - WAS Node Suite: `CLIPTextEncode (BlenderNeko Advanced + NSP)` node enabled under `WAS Suite/Conditioning` menu.
2025-06-10T14:58:07.055776 - WAS Node Suite: OpenCV Python FFMPEG support is enabled
2025-06-10T14:58:07.055821 - WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/was-node-suite-comfyui/was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
2025-06-10T14:58:07.432866 - WAS Node Suite: Finished. Loaded 221 nodes successfully.
2025-06-10T14:58:07.432903 - "Success usually comes to those who are too busy to be looking for it." - Henry David Thoreau
2025-06-10T14:58:07.434387 - pruna_pro not installed, skipping
2025-06-10T14:58:07.434557 - Neither pruna_pro nor pruna are installed, skipping
2025-06-10T14:58:07.434777 - pruna_pro not installed, skipping
2025-06-10T14:58:07.434937 - Neither pruna_pro nor pruna are installed, skipping
2025-06-10T14:58:07.436459 - Warning: Could not load sageattention: No module named 'sageattention'
2025-06-10T14:58:07.436479 - sageattention package is not installed
2025-06-10T14:58:07.444099 - ------------------------------------------
2025-06-10T14:58:07.444129 - Comfyroll Studio v1.76 : 175 Nodes Loaded
2025-06-10T14:58:07.444152 - ------------------------------------------
2025-06-10T14:58:07.444164 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
2025-06-10T14:58:07.444175 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
2025-06-10T14:58:07.444187 - ------------------------------------------
2025-06-10T14:58:07.451426 - [ReActor] - STATUS - Running v0.6.0-a1 in ComfyUI
2025-06-10T14:58:07.462346 - Torch version: 2.6.0+cu124
2025-06-10T14:58:07.467590 - (pysssss:WD14Tagger) [DEBUG] Available ORT providers: TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider
2025-06-10T14:58:07.467611 - (pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider
2025-06-10T14:58:07.841865 - Traceback (most recent call last):
File "/home/falali/flux/ComfyUI/ComfyUI/nodes.py", line 2124, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-if_ai_promptimagem/__init__.py", line 14, in <module>
from .IFPromptImaGENNode import IFPROMPTImaGEN
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-if_ai_promptimagem/IFPromptImaGENNode.py", line 13, in <module>
from .send_request import send_request
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-if_ai_promptimagem/send_request.py", line 27, in <module>
from .transformers_api import TransformersModelManager
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-if_ai_promptimagem/transformers_api.py", line 2, in <module>
from transformers import (
ImportError: cannot import name 'Qwen2VLForConditionalGeneration' from 'transformers' (/home/falali/.local/lib/python3.10/site-packages/transformers/__init__.py)
2025-06-10T14:58:07.842053 - Cannot import /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-if_ai_promptimagem module for custom nodes: cannot import name 'Qwen2VLForConditionalGeneration' from 'transformers' (/home/falali/.local/lib/python3.10/site-packages/transformers/__init__.py)
2025-06-10T14:58:07.851922 - Traceback (most recent call last):
File "/home/falali/flux/ComfyUI/ComfyUI/nodes.py", line 2124, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-sam2/__init__.py", line 1, in <module>
from .node import SAM2ModelLoader, GroundingDinoModelLoader, GroundingDinoSAM2Segment, InvertMask, IsMaskEmptyNode
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-sam2/node.py", line 15, in <module>
from sam2.build_sam import build_sam2
ModuleNotFoundError: No module named 'sam2.build_sam'
2025-06-10T14:58:07.852009 - Cannot import /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-sam2 module for custom nodes: No module named 'sam2.build_sam'
2025-06-10T14:58:08.019776 - --------------
2025-06-10T14:58:08.019836 - ### Mixlab Nodes: Loaded
2025-06-10T14:58:08.027065 - json_repair## OK
2025-06-10T14:58:08.027700 - ChatGPT.available True
2025-06-10T14:58:08.027824 - edit_mask.available True
2025-06-10T14:58:08.077079 - ## clip_interrogator_model not found: /home/falali/flux/ComfyUI/ComfyUI/models/clip_interrogator/Salesforce/blip-image-captioning-base, pls download from https://huggingface.co/Salesforce/blip-image-captioning-base
2025-06-10T14:58:08.077167 - ClipInterrogator.available True
2025-06-10T14:58:08.077335 - ## text_generator_model not found: /home/falali/flux/ComfyUI/ComfyUI/models/prompt_generator/text2image-prompt-generator, pls download from https://huggingface.co/succinctly/text2image-prompt-generator/tree/main
2025-06-10T14:58:08.077354 - ## zh_en_model not found: /home/falali/flux/ComfyUI/ComfyUI/models/prompt_generator/opus-mt-zh-en, pls download from https://huggingface.co/Helsinki-NLP/opus-mt-zh-en/tree/main
2025-06-10T14:58:08.077549 - PromptGenerate.available True
2025-06-10T14:58:08.077574 - ChinesePrompt.available True
2025-06-10T14:58:08.077603 - RembgNode_.available True
2025-06-10T14:58:08.143247 - TripoSR.available
2025-06-10T14:58:08.143461 - MiniCPMNode.available
2025-06-10T14:58:08.169474 - Scenedetect.available
2025-06-10T14:58:08.200629 - FishSpeech.available
2025-06-10T14:58:08.202656 - SenseVoice.available
2025-06-10T14:58:08.211300 - Whisper.available False
2025-06-10T14:58:08.211503 - fal-client## OK
2025-06-10T14:58:08.214822 - FalVideo.available
2025-06-10T14:58:08.214865 - --------------
2025-06-10T14:58:08.218437 - ----------Jake Upgrade Nodes Loaded----------
2025-06-10T14:58:08.250690 - Use Proxy: http://127.0.0.1:7897/
2025-06-10T14:58:08.260241 - Traceback (most recent call last):
File "/home/falali/flux/ComfyUI/ComfyUI/nodes.py", line 2124, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/Bjornulf_custom_nodes/__init__.py", line 74, in <module>
from .ollama_talk import OllamaTalk
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/Bjornulf_custom_nodes/ollama_talk.py", line 15, in <module>
import ollama
File "/home/falali/.local/lib/python3.10/site-packages/ollama/__init__.py", line 40, in <module>
_client = Client()
File "/home/falali/.local/lib/python3.10/site-packages/ollama/_client.py", line 114, in __init__
super().__init__(httpx.Client, host, **kwargs)
File "/home/falali/.local/lib/python3.10/site-packages/ollama/_client.py", line 91, in __init__
self._client = client(
File "/home/falali/.local/lib/python3.10/site-packages/httpx/_client.py", line 693, in __init__
proxy_map = self._get_proxy_map(proxies or proxy, allow_env_proxies)
File "/home/falali/.local/lib/python3.10/site-packages/httpx/_client.py", line 218, in _get_proxy_map
return {
File "/home/falali/.local/lib/python3.10/site-packages/httpx/_client.py", line 219, in <dictcomp>
key: None if url is None else Proxy(url=url)
File "/home/falali/.local/lib/python3.10/site-packages/httpx/_config.py", line 338, in __init__
raise ValueError(f"Unknown scheme for proxy URL {url!r}")
ValueError: Unknown scheme for proxy URL URL('socks://127.0.0.1:7897/')
2025-06-10T14:58:08.260551 - Cannot import /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/Bjornulf_custom_nodes module for custom nodes: Unknown scheme for proxy URL URL('socks://127.0.0.1:7897/')
2025-06-10T14:58:08.263666 - Total VRAM 24210 MB, total RAM 64068 MB
2025-06-10T14:58:08.263705 - pytorch version: 2.6.0+cu124
2025-06-10T14:58:08.263739 - xformers version: 0.0.29.post3
2025-06-10T14:58:08.263789 - Set vram state to: NORMAL_VRAM
2025-06-10T14:58:08.263845 - Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
2025-06-10T14:58:08.668338 -
Import times for custom nodes:
2025-06-10T14:58:08.668409 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/websocket_image_save.py
2025-06-10T14:58:08.668431 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI-PonyCharacterPrompt
2025-06-10T14:58:08.668449 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI-WanVideoKsampler
2025-06-10T14:58:08.668465 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI_AdvancedRefluxControl
2025-06-10T14:58:08.668480 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-img2drawingassistants
2025-06-10T14:58:08.668494 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_ttp_toolset
2025-06-10T14:58:08.668507 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-inpaint-cropandstitch
2025-06-10T14:58:08.668521 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/cg-use-everywhere
2025-06-10T14:58:08.668535 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/asterr
2025-06-10T14:58:08.668549 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI_ADV_CLIP_emb
2025-06-10T14:58:08.668562 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_faceanalysis
2025-06-10T14:58:08.668576 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI-UltimateSDUpscale-GGUF
2025-06-10T14:58:08.668589 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-wd14-tagger
2025-06-10T14:58:08.668601 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI-YOLO
2025-06-10T14:58:08.668615 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_zenid
2025-06-10T14:58:08.668628 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/aurasr-comfyui
2025-06-10T14:58:08.668642 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_instantid
2025-06-10T14:58:08.668655 - 0.0 seconds (IMPORT FAILED): /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-sam2
2025-06-10T14:58:08.668672 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI-GGUF
2025-06-10T14:58:08.668690 - 0.0 seconds (IMPORT FAILED): /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-crewai
2025-06-10T14:58:08.668704 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-template-loader
2025-06-10T14:58:08.668721 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyLiterals
2025-06-10T14:58:08.668735 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfy-image-saver
2025-06-10T14:58:08.668747 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI_pruna
2025-06-10T14:58:08.668760 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-custom-scripts
2025-06-10T14:58:08.668773 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/teacache
2025-06-10T14:58:08.668785 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI-Jjk-Nodes
2025-06-10T14:58:08.668797 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_patches_ll
2025-06-10T14:58:08.668810 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI-QualityOfLifeSuit_Omar92
2025-06-10T14:58:08.668822 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI-WanSeamlessFlow
2025-06-10T14:58:08.668838 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_essentials
2025-06-10T14:58:08.668853 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-advancedliveportrait
2025-06-10T14:58:08.668891 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/derfuu_comfyui_moddednodes
2025-06-10T14:58:08.668903 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-frame-interpolation
2025-06-10T14:58:08.668916 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-brushnet
2025-06-10T14:58:08.668929 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI-segment-anything-2
2025-06-10T14:58:08.668943 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/rgthree-comfy
2025-06-10T14:58:08.668955 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-jakeupgrade
2025-06-10T14:58:08.668969 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-advanced-controlnet
2025-06-10T14:58:08.668982 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_slk_joy_caption_two
2025-06-10T14:58:08.668995 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI-WanVideoWrapper
2025-06-10T14:58:08.669008 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale
2025-06-10T14:58:08.669020 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/gguf
2025-06-10T14:58:08.669033 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_ipadapter_plus
2025-06-10T14:58:08.669045 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes-main
2025-06-10T14:58:08.669058 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI_MiniCPM-V-2_6-int4
2025-06-10T14:58:08.669070 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_controlnet_aux
2025-06-10T14:58:08.669082 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/efficiency-nodes-comfyui
2025-06-10T14:58:08.669094 - 0.0 seconds (IMPORT FAILED): /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-tensorops
2025-06-10T14:58:08.669108 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-impact-pack
2025-06-10T14:58:08.669123 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_segment_anything
2025-06-10T14:58:08.669135 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI-Crystools
2025-06-10T14:58:08.669147 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/Comfyui_Redux_Advanced
2025-06-10T14:58:08.669160 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-kjnodes
2025-06-10T14:58:08.669173 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfy-mtb
2025-06-10T14:58:08.669185 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-reactor
2025-06-10T14:58:08.669198 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_layerstyle
2025-06-10T14:58:08.669211 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-to-python-extension
2025-06-10T14:58:08.669223 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-impact-subpack
2025-06-10T14:58:08.669236 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-videohelpersuite
2025-06-10T14:58:08.669248 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-manager
2025-06-10T14:58:08.669260 - 0.0 seconds (IMPORT FAILED): /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/Bjornulf_custom_nodes
2025-06-10T14:58:08.669273 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-easy-use
2025-06-10T14:58:08.669286 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI-PuLID-Flux-Enhanced
2025-06-10T14:58:08.669298 - 0.0 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/pulid_comfyui
2025-06-10T14:58:08.669310 - 0.1 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-florence2
2025-06-10T14:58:08.669324 - 0.2 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-dynamicprompts
2025-06-10T14:58:08.669338 - 0.3 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/ComfyUI-BrushNet-Wrapper
2025-06-10T14:58:08.669352 - 0.4 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-mixlab-nodes
2025-06-10T14:58:08.669366 - 0.4 seconds (IMPORT FAILED): /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-if_ai_promptimagem
2025-06-10T14:58:08.669381 - 0.4 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/bjornulf_custom_nodes
2025-06-10T14:58:08.669395 - 0.8 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/was-node-suite-comfyui
2025-06-10T14:58:08.669409 - 2.2 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_pulid_flux_ll
2025-06-10T14:58:08.669421 - 4.1 seconds: /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_llm_party
2025-06-10T14:58:08.669435 -
2025-06-10T14:58:08.679847 - ********** ERROR ***********
comfyui-workflow-templates is not installed.
Please install the updated requirements.txt file by running:
/usr/bin/python3 -m pip install -r /home/falali/flux/ComfyUI/ComfyUI/requirements.txt
This error is happening because the ComfyUI frontend is no longer shipped as part of the main repo but as a pip package instead.
If you are on the portable package you can run: update\update_comfyui.bat to solve this problem
********** ERROR ***********
2025-06-10T14:58:08.680055 - comfyui-embedded-docs package not found
2025-06-10T14:58:08.680541 - Starting server
2025-06-10T14:58:08.680685 - To see the GUI go to: http://10.201.10.37:8188
2025-06-10T14:58:09.124814 - FETCH ComfyRegistry Data: 5/88
2025-06-10T14:58:09.626135 - Use Proxy: http://127.0.0.1:7897/
2025-06-10T14:58:11.038091 - Use Proxy: http://127.0.0.1:7897/
2025-06-10T14:58:12.399041 - /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-mixlab-nodes/webApp/lib/photoswipe-lightbox.esm.min.js
2025-06-10T14:58:12.408340 - /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-mixlab-nodes/webApp/lib/pickr.min.js
2025-06-10T14:58:12.433202 - /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-mixlab-nodes/webApp/lib/photoswipe.min.css
2025-06-10T14:58:12.453382 - /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-mixlab-nodes/webApp/lib/classic.min.css
2025-06-10T14:58:12.478580 - /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-mixlab-nodes/webApp/lib/model-viewer.min.js
2025-06-10T14:58:12.480704 - /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-mixlab-nodes/webApp/lib/juxtapose.css
2025-06-10T14:58:12.480836 - /home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui-mixlab-nodes/webApp/lib/juxtapose.min.js
2025-06-10T14:58:12.488751 - Use Proxy: http://127.0.0.1:7897/
2025-06-10T14:58:12.571749 - QualityOfLifeSuit_Omar92::NSP ready
2025-06-10T14:58:13.827379 - Use Proxy: http://127.0.0.1:7897/
2025-06-10T14:58:15.157581 - Use Proxy: http://127.0.0.1:7897/
2025-06-10T14:58:15.998713 - FETCH ComfyRegistry Data: 10/88
2025-06-10T14:58:16.499447 - Use Proxy: http://127.0.0.1:7897/
2025-06-10T14:58:17.272994 - got prompt
2025-06-10T14:58:17.317800 - Using xformers attention in VAE
2025-06-10T14:58:17.318617 - Using xformers attention in VAE
2025-06-10T14:58:17.451959 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-06-10T14:58:17.479966 - /home/falali/flux/ComfyUI/ComfyUI/models/clip/siglip-so400m-patch14-384
2025-06-10T14:58:17.808593 - Use Proxy: http://127.0.0.1:7897/
2025-06-10T14:58:18.169537 - Loading VLM's custom vision model
2025-06-10T14:58:18.767697 - Prompt: Write a long descriptive caption for this image in a formal tone. If there is a person/character in the image you must refer to them as .
2025-06-10T14:58:18.814337 - Requested to load SiglipVisionTransformer
2025-06-10T14:58:18.890225 - loaded completely 9.5367431640625e+25 809.1729736328125 True
2025-06-10T14:58:19.004774 - Requested to load ImageAdapter
2025-06-10T14:58:19.008798 - loaded completely 9.5367431640625e+25 41.0390625 True
2025-06-10T14:58:19.010460 - Loading tokenizer
2025-06-10T14:58:19.134628 - Use Proxy: http://127.0.0.1:7897/
2025-06-10T14:58:19.135599 - Loading LLM: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
2025-06-10T14:58:19.135641 - /home/falali/flux/ComfyUI/ComfyUI/models/LLM/Meta-Llama-3.1-8B-Instruct-bnb-4bit
2025-06-10T14:58:19.135803 - Successfully modified 'base_model_name_or_path' value in '/home/falali/flux/ComfyUI/ComfyUI/models/Joy_caption_two/text_model/adapter_config.json'.
2025-06-10T14:58:20.443970 - Use Proxy: http://127.0.0.1:7897/
2025-06-10T14:58:21.037028 - !!! Exception during processing !!! `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
2025-06-10T14:58:21.037817 - Traceback (most recent call last):
File "/home/falali/flux/ComfyUI/ComfyUI/execution.py", line 349, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/home/falali/flux/ComfyUI/ComfyUI/execution.py", line 224, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/home/falali/flux/ComfyUI/ComfyUI/execution.py", line 196, in _map_node_over_list
process_inputs(input_dict, i)
File "/home/falali/flux/ComfyUI/ComfyUI/execution.py", line 185, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_slk_joy_caption_two/joy_caption_two_node.py", line 546, in generate
text_model = joy_two_pipeline.llm.load_llm_model()
File "/home/falali/flux/ComfyUI/ComfyUI/custom_nodes/comfyui_slk_joy_caption_two/joy_caption_two_node.py", line 177, in load_llm_model
text_model = AutoModelForCausalLM.from_pretrained(text_model_path,
File "/home/falali/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
return model_class.from_pretrained(
File "/home/falali/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4015, in from_pretrained
dispatch_model(model, **device_map_kwargs)
File "/home/falali/.local/lib/python3.10/site-packages/accelerate/big_modeling.py", line 501, in dispatch_model
model.to(device)
File "/home/falali/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2861, in to
raise ValueError(
ValueError: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
2025-06-10T14:58:21.038033 - Prompt executed in 3.76 seconds
```
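For context, here is a minimal stand-in sketch of why the dispatch step in the traceback fails. The class and function names (`FakeQuantizedModel`, `safe_dispatch`) are hypothetical, not ComfyUI or transformers APIs: quantized bitsandbytes models set `is_loaded_in_4bit` and reject `.to()`, so any code path that dispatches the model afterwards must skip the device move (the model is already placed correctly at load time, e.g. via `device_map` in `from_pretrained`).

```python
class FakeQuantizedModel:
    """Hypothetical stand-in for a bitsandbytes 4-bit model (not a real API)."""
    is_loaded_in_4bit = True

    def to(self, device):
        # Mirrors the transformers guard that produced the traceback above
        if self.is_loaded_in_4bit:
            raise ValueError(
                "`.to` is not supported for `4-bit` or `8-bit` bitsandbytes models."
            )
        return self


def safe_dispatch(model, device):
    # Skip the device move for already-quantized models; they were placed on
    # the correct device during loading.
    if getattr(model, "is_loaded_in_4bit", False) or getattr(model, "is_loaded_in_8bit", False):
        return model
    return model.to(device)


model = FakeQuantizedModel()
print(safe_dispatch(model, "cuda:0") is model)  # True — no .to() call, no error
```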
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
Workflow too large. Please manually upload the workflow from local file system.
```
## Additional Context
(Please add any additional context or steps to reproduce the error here)
### Who can help?

### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction

### Expected behavior
I hope it runs. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38717/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38717/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38716/comments | https://api.github.com/repos/huggingface/transformers/issues/38716/events | https://github.com/huggingface/transformers/pull/38716 | 3,132,240,766 | PR_kwDOCUB6oc6ZycUO | 38,716 | Fix #38709: Add support for return_last_hidden_state in get_video_features | {
"login": "babyfox1306",
"id": 155075090,
"node_id": "U_kgDOCT5CEg",
"avatar_url": "https://avatars.githubusercontent.com/u/155075090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/babyfox1306",
"html_url": "https://github.com/babyfox1306",
"followers_url": "https://api.github.com/users/babyfox1306/followers",
"following_url": "https://api.github.com/users/babyfox1306/following{/other_user}",
"gists_url": "https://api.github.com/users/babyfox1306/gists{/gist_id}",
"starred_url": "https://api.github.com/users/babyfox1306/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/babyfox1306/subscriptions",
"organizations_url": "https://api.github.com/users/babyfox1306/orgs",
"repos_url": "https://api.github.com/users/babyfox1306/repos",
"events_url": "https://api.github.com/users/babyfox1306/events{/privacy}",
"received_events_url": "https://api.github.com/users/babyfox1306/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-10T05:56:37 | 2025-06-10T12:09:58 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38716",
"html_url": "https://github.com/huggingface/transformers/pull/38716",
"diff_url": "https://github.com/huggingface/transformers/pull/38716.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38716.patch",
"merged_at": null
} | This PR adds support for returning the last_hidden_state in the get_video_features method of XCLIPModel, as requested in #38709.
## ✅ Summary of changes
Added a new class `XCLIPVideoFeatureOutput` inheriting from `BaseModelOutput`, including:
- `pooled_output`
- Optional `last_hidden_state`
- Optional `hidden_states`
- Optional `attentions`
Modified `get_video_features`:
- New argument `return_last_hidden_state` (default False)
- Now supports `return_dict=True` and optional returns
- Keeps backward compatibility (default behavior unchanged)
- Updated the docstring to reflect the new usage options
Added unit tests to cover:
- Default output
- Output with `return_last_hidden_state=True`
- Output with `return_dict=True`
- Full output with hidden states and attentions
## 💡 Example Usage
```python
# Default usage (returns pooled_output only)
pooled_output = model.get_video_features(pixel_values)
# Get last_hidden_state instead of pooled_output
last_hidden_state = model.get_video_features(
pixel_values,
return_last_hidden_state=True
)
# Get all outputs
outputs = model.get_video_features(
pixel_values,
return_last_hidden_state=True,
return_dict=True,
output_hidden_states=True,
output_attentions=True
)
print(outputs.last_hidden_state.shape)
print(outputs.hidden_states[-1].shape)
```
This change improves the flexibility of the XCLIPModel without breaking any existing functionality, and it adds support for feature extraction and analysis use cases. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38716/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/38715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38715/comments | https://api.github.com/repos/huggingface/transformers/issues/38715/events | https://github.com/huggingface/transformers/pull/38715 | 3,132,071,175 | PR_kwDOCUB6oc6Zx4N9 | 38,715 | Fix L270 - hasattr("moe_args") returning False error | {
"login": "wjdghks950",
"id": 28642466,
"node_id": "MDQ6VXNlcjI4NjQyNDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/28642466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wjdghks950",
"html_url": "https://github.com/wjdghks950",
"followers_url": "https://api.github.com/users/wjdghks950/followers",
"following_url": "https://api.github.com/users/wjdghks950/following{/other_user}",
"gists_url": "https://api.github.com/users/wjdghks950/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wjdghks950/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wjdghks950/subscriptions",
"organizations_url": "https://api.github.com/users/wjdghks950/orgs",
"repos_url": "https://api.github.com/users/wjdghks950/repos",
"events_url": "https://api.github.com/users/wjdghks950/events{/privacy}",
"received_events_url": "https://api.github.com/users/wjdghks950/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T04:04:54 | 2025-07-16T09:46:32 | 2025-07-16T09:45:59 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38715",
"html_url": "https://github.com/huggingface/transformers/pull/38715",
"diff_url": "https://github.com/huggingface/transformers/pull/38715.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38715.patch",
"merged_at": "2025-07-16T09:45:59"
} | Hi, I've been trying to convert the Llama-4 weights to HF using the `convert_llama4_weights_to_hf.py` and I've been having an error that causes the `num_experts` to be set to 0 due to the problem in L270 as referenced below:
https://github.com/huggingface/transformers/blame/81799d8b556b3c810ed314187674bc439c0582b4/src/transformers/models/llama4/convert_llama4_weights_to_hf.py#L270
The `if hasattr(params, "moe_args")` check in L270 always returns `False`, since "moe_args" is a key of the `params` dict, not an attribute. I changed the line to `if params.get("moe_args", None):` and the error is gone. I recommend this change. Thank you.
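A minimal, self-contained repro of the described behavior (the dict contents here are illustrative, not the actual `params.json`):

```python
# "moe_args" is a key of a plain dict (as loaded from a JSON params file),
# not an attribute, so hasattr() can never see it.
params = {"moe_args": {"num_experts": 16, "num_experts_per_tok": 1}}

print(hasattr(params, "moe_args"))   # False — the buggy check always skips the MoE branch
print(bool(params.get("moe_args")))  # True — the proposed fix sees the key
```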
Tag: @ArthurZucker
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38715/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38714/comments | https://api.github.com/repos/huggingface/transformers/issues/38714/events | https://github.com/huggingface/transformers/pull/38714 | 3,132,060,342 | PR_kwDOCUB6oc6Zx18O | 38,714 | Add rounding error check to _maybe_log_save_evaluate | {
"login": "marcndo",
"id": 178362075,
"node_id": "U_kgDOCqGW2w",
"avatar_url": "https://avatars.githubusercontent.com/u/178362075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcndo",
"html_url": "https://github.com/marcndo",
"followers_url": "https://api.github.com/users/marcndo/followers",
"following_url": "https://api.github.com/users/marcndo/following{/other_user}",
"gists_url": "https://api.github.com/users/marcndo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcndo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcndo/subscriptions",
"organizations_url": "https://api.github.com/users/marcndo/orgs",
"repos_url": "https://api.github.com/users/marcndo/repos",
"events_url": "https://api.github.com/users/marcndo/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcndo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T03:56:28 | 2025-09-22T16:26:29 | 2025-09-22T16:26:29 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38714",
"html_url": "https://github.com/huggingface/transformers/pull/38714",
"diff_url": "https://github.com/huggingface/transformers/pull/38714.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38714.patch",
"merged_at": null
} | # What does this PR do?
This PR aims to address the issue raised in #38032 by deciding the format in which the loss should be logged.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
#38032
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline] (https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section? Yes
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/38032#issuecomment-2953713166
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). Yes
- [ ] Did you write any new necessary tests? No
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @SunMarc
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "marcndo",
"id": 178362075,
"node_id": "U_kgDOCqGW2w",
"avatar_url": "https://avatars.githubusercontent.com/u/178362075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcndo",
"html_url": "https://github.com/marcndo",
"followers_url": "https://api.github.com/users/marcndo/followers",
"following_url": "https://api.github.com/users/marcndo/following{/other_user}",
"gists_url": "https://api.github.com/users/marcndo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcndo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcndo/subscriptions",
"organizations_url": "https://api.github.com/users/marcndo/orgs",
"repos_url": "https://api.github.com/users/marcndo/repos",
"events_url": "https://api.github.com/users/marcndo/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcndo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38714/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38713/comments | https://api.github.com/repos/huggingface/transformers/issues/38713/events | https://github.com/huggingface/transformers/pull/38713 | 3,132,002,420 | PR_kwDOCUB6oc6ZxppB | 38,713 | [1/N] Use list,tuple,dict for typing | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T03:07:55 | 2025-06-17T23:10:24 | 2025-06-17T23:10:18 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38713",
"html_url": "https://github.com/huggingface/transformers/pull/38713",
"diff_url": "https://github.com/huggingface/transformers/pull/38713.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38713.patch",
"merged_at": null
} | # What does this PR do?
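As a quick illustration of the PEP 585 change being applied (function names here are illustrative), the builtin generics replace the `typing` aliases with identical runtime behavior on Python 3.9+:

```python
from typing import List  # only needed for the pre-PEP 585 spelling


# Before: typing generics (deprecated since Python 3.9)
def dedupe_old(items: List[str]) -> List[str]:
    return list(dict.fromkeys(items))


# After: builtin generics, no typing import required
def dedupe_new(items: list[str]) -> list[str]:
    return list(dict.fromkeys(items))


print(dedupe_new(["a", "b", "a"]))  # ['a', 'b']
```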
Replace typing.{List,Tuple,Dict} with the builtin {list, tuple, dict} generics. Because the full set of changes is large, it is split into smaller PRs. | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38713/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38712/comments | https://api.github.com/repos/huggingface/transformers/issues/38712/events | https://github.com/huggingface/transformers/pull/38712 | 3,131,974,796 | PR_kwDOCUB6oc6Zxj4Z | 38,712 | Use OSError | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T02:46:03 | 2025-07-17T13:41:40 | 2025-06-10T12:13:49 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38712",
"html_url": "https://github.com/huggingface/transformers/pull/38712",
"diff_url": "https://github.com/huggingface/transformers/pull/38712.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38712.patch",
"merged_at": "2025-06-10T12:13:49"
} | # What does this PR do?
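Background worth noting: since Python 3.3 (PEP 3151), the legacy OS-related exception names are plain aliases of `OSError`, which is what makes this normalization behavior-preserving. A quick check:

```python
# The old names are the same class object, not subclasses.
assert IOError is OSError
assert EnvironmentError is OSError

try:
    open("/this/path/should/not/exist")
except OSError as exc:  # also catches what older code spelled as IOError
    print(type(exc).__name__)  # FileNotFoundError, an OSError subclass
```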
Gradually apply Python 3.9 syntax to existing files. This PR changes OSError aliases to OSError. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38712/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38711/comments | https://api.github.com/repos/huggingface/transformers/issues/38711/events | https://github.com/huggingface/transformers/pull/38711 | 3,131,954,410 | PR_kwDOCUB6oc6ZxflC | 38,711 | Updated moonshine modelcard | {
"login": "SohamPrabhu",
"id": 62270341,
"node_id": "MDQ6VXNlcjYyMjcwMzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/62270341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SohamPrabhu",
"html_url": "https://github.com/SohamPrabhu",
"followers_url": "https://api.github.com/users/SohamPrabhu/followers",
"following_url": "https://api.github.com/users/SohamPrabhu/following{/other_user}",
"gists_url": "https://api.github.com/users/SohamPrabhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SohamPrabhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SohamPrabhu/subscriptions",
"organizations_url": "https://api.github.com/users/SohamPrabhu/orgs",
"repos_url": "https://api.github.com/users/SohamPrabhu/repos",
"events_url": "https://api.github.com/users/SohamPrabhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SohamPrabhu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-10T02:30:15 | 2025-06-13T03:08:20 | 2025-06-12T17:27:18 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38711",
"html_url": "https://github.com/huggingface/transformers/pull/38711",
"diff_url": "https://github.com/huggingface/transformers/pull/38711.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38711.patch",
"merged_at": "2025-06-12T17:27:18"
} | # What does this PR do?
This PR updates the Moonshine model card as part of the effort to standardize all model cards. It shortens the model description and adds examples of pipeline and AutoModel usage. A CLI example was not included because audio models don't work with the CLI. Quantization was not applied, and AttentionMaskVisualizer does not support Moonshine.
#36979
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@stevhliu Or anyone that works with the documentation
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38711/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38710 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38710/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38710/comments | https://api.github.com/repos/huggingface/transformers/issues/38710/events | https://github.com/huggingface/transformers/issues/38710 | 3,131,865,749 | I_kwDOCUB6oc66rHqV | 38,710 | There is no transformers version that can run DeepSeek V3 generate | {
"login": "pbelevich",
"id": 1160355,
"node_id": "MDQ6VXNlcjExNjAzNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1160355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pbelevich",
"html_url": "https://github.com/pbelevich",
"followers_url": "https://api.github.com/users/pbelevich/followers",
"following_url": "https://api.github.com/users/pbelevich/following{/other_user}",
"gists_url": "https://api.github.com/users/pbelevich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pbelevich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pbelevich/subscriptions",
"organizations_url": "https://api.github.com/users/pbelevich/orgs",
"repos_url": "https://api.github.com/users/pbelevich/repos",
"events_url": "https://api.github.com/users/pbelevich/events{/privacy}",
"received_events_url": "https://api.github.com/users/pbelevich/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-10T01:17:27 | 2025-07-15T10:07:36 | 2025-07-15T10:07:36 | CONTRIBUTOR | null | null | null | null | ### System Info
DeepSeek V3 `generate` relies on `get_max_length` (removed in 4.49.0) and on fp8 quantization (introduced in 4.49.0), so no single transformers release supports both.
1. The DeepSeek 671B models call `past_key_values.get_max_length()`:
https://huggingface.co/deepseek-ai/DeepSeek-V3/blob/main/modeling_deepseek.py#L1654
https://huggingface.co/deepseek-ai/DeepSeek-R1/blob/main/modeling_deepseek.py#L1654
It was removed in 4.49.0: https://github.com/huggingface/transformers/commit/80dbbd103c217f422de91a3265bf6d8e8bc414f7
2. DeepSeek 671B models use FP8 quantization:
https://huggingface.co/deepseek-ai/DeepSeek-V3/blob/main/config.json#L40
https://huggingface.co/deepseek-ai/DeepSeek-R1/blob/main/config.json#L40
It was introduced in 4.49.0: https://github.com/huggingface/transformers/commit/efe72fe21f4292e4f3a74344c0a065dc69480b3b
cc @ArthurZucker
### Who can help?
With transformers < 4.49.0, loading fails because the fp8 quantization type is unknown:
```
File "/opt/miniconda3/lib/python3.12/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/lib/python3.12/site-packages/transformers/modeling_utils.py", line 3647, in from_pretrained
config.quantization_config = AutoHfQuantizer.merge_quantization_configs(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/lib/python3.12/site-packages/transformers/quantizers/auto.py", line 173, in merge_quantization_configs
quantization_config = AutoQuantizationConfig.from_dict(quantization_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/lib/python3.12/site-packages/transformers/quantizers/auto.py", line 97, in from_dict
raise ValueError(
ValueError: Unknown quantization type, got fp8 - supported types are: ['awq', 'bitsandbytes_4bit', 'bitsandbytes_8bit', 'gptq', 'aqlm', 'quanto', 'eetq', 'hqq', 'compressed-tensors', 'fbgemm_fp8', 'torchao', 'bitnet']
```
With transformers >= 4.49.0, `generate` fails because `DynamicCache.get_max_length` no longer exists:
```
File "/opt/miniconda3/lib/python3.12/site-packages/transformers/generation/utils.py", line 3550, in _sample
model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsxl/belevich/.cache/huggingface/modules/transformers_modules/deepseek-ai/DeepSeek-R1/56d4cbbb4d29f4355bab4b9a39ccb717a14ad5ad/modeling_deepseek.py", line 1654, in prepare_inputs_for_generation
max_cache_length = past_key_values.get_max_length()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'DynamicCache' object has no attribute 'get_max_length'. Did you mean: 'get_seq_length'?
```
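A possible stopgap on >= 4.49.0, until the remote `modeling_deepseek.py` catches up, is to re-attach the removed method before calling `generate`. The sketch below uses a local stand-in class rather than the real `transformers.cache_utils.DynamicCache` (patching the real class the same way is untested here); for a `DynamicCache` the old `get_max_length` simply returned `None`:

```python
class DynamicCache:
    """Local stand-in for transformers.cache_utils.DynamicCache (illustration only)."""

    def get_seq_length(self):
        # The real method returns the current number of cached tokens.
        return 0


# Shim: restore the API removed in 4.49.0. On a real install you would
# import DynamicCache from transformers.cache_utils and patch it identically.
if not hasattr(DynamicCache, "get_max_length"):
    # A DynamicCache has no fixed maximum length, so the old method returned None.
    DynamicCache.get_max_length = lambda self: None

cache = DynamicCache()
print(cache.get_max_length())  # -> None
```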
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Call `model.generate()` on one of the DeepSeek V3/R1 checkpoints.
### Expected behavior
Have a transformers release that can run DeepSeek V3 `generate`. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38710/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38709 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38709/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38709/comments | https://api.github.com/repos/huggingface/transformers/issues/38709/events | https://github.com/huggingface/transformers/issues/38709 | 3,131,836,036 | I_kwDOCUB6oc66rAaE | 38,709 | `get_video_features` in XCLIPModel always returns `pooled_output` | {
"login": "Vishu26",
"id": 24605821,
"node_id": "MDQ6VXNlcjI0NjA1ODIx",
"avatar_url": "https://avatars.githubusercontent.com/u/24605821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vishu26",
"html_url": "https://github.com/Vishu26",
"followers_url": "https://api.github.com/users/Vishu26/followers",
"following_url": "https://api.github.com/users/Vishu26/following{/other_user}",
"gists_url": "https://api.github.com/users/Vishu26/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vishu26/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vishu26/subscriptions",
"organizations_url": "https://api.github.com/users/Vishu26/orgs",
"repos_url": "https://api.github.com/users/Vishu26/repos",
"events_url": "https://api.github.com/users/Vishu26/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vishu26/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-10T00:51:37 | 2025-07-18T08:02:50 | 2025-07-18T08:02:50 | NONE | null | null | null | null | ### System Info
https://github.com/huggingface/transformers/blob/f4fc42216cd56ab6b68270bf80d811614d8d59e4/src/transformers/models/x_clip/modeling_x_clip.py#L1376
Hi
The `get_video_features` function is hardcoded to always return the `pooled_output`, but it can be useful to get the `last_hidden_state` instead. Can this behavior be made configurable?
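In the meantime, one workaround is to call the vision tower directly and read `last_hidden_state` from its output. The sketch below only illustrates the call pattern, with a stub standing in for a real `XCLIPModel` — on the real model the attribute is `model.vision_model`, and the pixel values may first need reshaping to `(batch * frames, channels, height, width)`:

```python
from types import SimpleNamespace


def get_video_hidden_states(model, pixel_values):
    """Hypothetical helper: bypass get_video_features and return the full
    last_hidden_state from the vision tower instead of the pooled output."""
    vision_outputs = model.vision_model(pixel_values=pixel_values)
    return vision_outputs.last_hidden_state


# Stub standing in for an XCLIPModel, to demonstrate the call pattern only.
stub = SimpleNamespace(
    vision_model=lambda pixel_values: SimpleNamespace(
        last_hidden_state=[[0.1, 0.2], [0.3, 0.4]],  # (tokens, hidden)
        pooler_output=[0.25, 0.3],
    )
)

hidden = get_video_hidden_states(stub, pixel_values=None)
```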
Thanks
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import av
import torch
import numpy as np
from transformers import AutoProcessor, AutoModel
from huggingface_hub import hf_hub_download
np.random.seed(0)
def read_video_pyav(container, indices):
'''
Decode the video with PyAV decoder.
Args:
container (`av.container.input.InputContainer`): PyAV container.
indices (`List[int]`): List of frame indices to decode.
Returns:
result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).
'''
frames = []
container.seek(0)
start_index = indices[0]
end_index = indices[-1]
for i, frame in enumerate(container.decode(video=0)):
if i > end_index:
break
if i >= start_index and i in indices:
frames.append(frame)
return np.stack([x.to_ndarray(format="rgb24") for x in frames])
def sample_frame_indices(clip_len, frame_sample_rate, seg_len):
'''
Sample a given number of frame indices from the video.
Args:
clip_len (`int`): Total number of frames to sample.
frame_sample_rate (`int`): Sample every n-th frame.
seg_len (`int`): Maximum allowed index of sample's last frame.
Returns:
indices (`List[int]`): List of sampled frame indices
'''
converted_len = int(clip_len * frame_sample_rate)
end_idx = np.random.randint(converted_len, seg_len)
start_idx = end_idx - converted_len
indices = np.linspace(start_idx, end_idx, num=clip_len)
indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)
return indices
# video clip consists of 300 frames (10 seconds at 30 FPS)
file_path = hf_hub_download(
repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset"
)
container = av.open(file_path)
# sample 8 frames
indices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames)
video = read_video_pyav(container, indices)
processor = AutoProcessor.from_pretrained("microsoft/xclip-base-patch32")
model = AutoModel.from_pretrained("microsoft/xclip-base-patch32")
inputs = processor(
videos=list(video),
return_tensors="pt",
padding=True,
)
# forward pass
with torch.no_grad():
outputs = model.get_video_features(**inputs)
print(outputs.shape)
```
### Expected behavior
The `get_video_features` function should have the option to output the `last_hidden_state` as well. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38709/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38708 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38708/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38708/comments | https://api.github.com/repos/huggingface/transformers/issues/38708/events | https://github.com/huggingface/transformers/issues/38708 | 3,131,768,212 | I_kwDOCUB6oc66qv2U | 38,708 | Bert2D: A 2D-Word Embedding Model for Morphologically Rich Languages | {
"login": "yigit353",
"id": 852489,
"node_id": "MDQ6VXNlcjg1MjQ4OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/852489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yigit353",
"html_url": "https://github.com/yigit353",
"followers_url": "https://api.github.com/users/yigit353/followers",
"following_url": "https://api.github.com/users/yigit353/following{/other_user}",
"gists_url": "https://api.github.com/users/yigit353/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yigit353/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yigit353/subscriptions",
"organizations_url": "https://api.github.com/users/yigit353/orgs",
"repos_url": "https://api.github.com/users/yigit353/repos",
"events_url": "https://api.github.com/users/yigit353/events{/privacy}",
"received_events_url": "https://api.github.com/users/yigit353/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2025-06-09T23:58:41 | 2025-06-10T00:01:31 | null | NONE | null | null | null | null | ### Model description
**Bert2D** is a novel transformer-based model that builds upon the `BertModel` architecture by introducing a two-dimensional word embedding system. This enhancement is specifically designed to improve performance on morphologically rich languages, such as Turkish, Finnish, and Hungarian. This model card describes the initial release, which includes the model implementation and a pretrained checkpoint for Turkish.
This work is based on the research outlined in the paper **"Bert2D: A 2D-Word Embedding for Morphologically Rich Languages"**, which has been accepted by IEEE and is available at: [https://ieeexplore.ieee.org/document/10542953](https://ieeexplore.ieee.org/document/10542953).
A pretrained model for Turkish, `Bert2D-cased-Turkish-128K-WWM-NSW2`, is available on the Hugging Face Hub at: [https://huggingface.co/yigitbekir/Bert2D-cased-Turkish-128K-WWM-NSW2](https://huggingface.co/yigitbekir/Bert2D-cased-Turkish-128K-WWM-NSW2)
## Model Description
The primary innovation of **Bert2D** is its use of a 2D positional embedding mechanism to better capture the complex morphological structures present in agglutinative languages. Unlike standard BERT models that use a 1D positional embedding, Bert2D employs a dual system:
1. **Whole-Word Positional Embeddings (1st Dimension):** This captures the absolute position of each word in a sequence.
2. **Sub-word Relative Positional Embeddings (2nd Dimension):** This encodes the relative position of sub-words within each word, allowing the model to distinguish between the beginning, middle, and end of a word's sub-tokens.
This two-dimensional approach provides a more nuanced representation of meaning by enabling the model to understand the relationships between words and their constituent morphemes. The model also incorporates **Whole Word Masking (WWM)**, a training technique where all sub-tokens corresponding to a single word are masked, encouraging the model to learn deeper contextual relationships.
### Architectural Innovations
The key components introduced in this release are:
* **`Bert2DModel`**: A new model class inheriting from `BertPreTrainedModel` that implements the 2D embedding logic. The core modifications are in the embeddings layer to accommodate the dual positional encoding.
* **`Bert2DTokenizer` and `Bert2DTokenizerFast`**: Custom tokenizers compatible with the `Bert2D` model.
* **Model Variants**: Includes all standard BERT architecture variants, such as:
* `Bert2DForMaskedLM`
* `Bert2DForSequenceClassification`
* `Bert2DForTokenClassification`
* `Bert2DForQuestionAnswering`
### Configuration Parameters
The `Bert2DConfig` includes new parameters to manage the 2D embeddings:
* `max_word_position_embeddings`: Defines the maximum number of words (not sub-tokens) the model can handle in a sequence. The default is `512`.
* `max_intermediate_subword_position_embeddings`: Specifies the embedding value for intermediate sub-tokens within a word. For the `NSW2` strategy, this is set to `2`.
The 2D embeddings are summed with the token and segment embeddings before being passed to the Transformer layers. The parameter count is nearly identical to a standard BERT model; the `128K` in the checkpoint name refers to the vocabulary size.
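To make the second dimension concrete, the sketch below assigns hypothetical `(word position, sub-word position)` pairs to a WordPiece sequence, clipping intermediate sub-word indices at `max_intermediate_subword_position_embeddings`. This is an illustration of the idea only — the tokenization shown and the exact NSW2 assignment rule (for example, how the final sub-word of a word is indexed) are assumptions, not the library's implementation:

```python
def two_d_positions(tokens, max_intermediate=2):
    """Assign (word_pos, subword_pos) to each WordPiece token.

    Continuation pieces (prefixed with '##') keep the word position of their
    head token; their sub-word index is clipped at max_intermediate.
    """
    word_positions, subword_positions = [], []
    word_idx, sub_idx = -1, 0
    for token in tokens:
        if token.startswith("##"):
            sub_idx += 1
        else:
            word_idx += 1
            sub_idx = 0
        word_positions.append(word_idx)
        subword_positions.append(min(sub_idx, max_intermediate))
    return word_positions, subword_positions


# Illustrative (hypothetical) tokenization of "Adamın mesleği [MASK]"
tokens = ["Adam", "##ın", "mes", "##lek", "##i", "[MASK]"]
words, subs = two_d_positions(tokens)
print(words)  # -> [0, 0, 1, 1, 1, 2]
print(subs)   # -> [0, 1, 0, 1, 2, 0]
```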
### Example Usage
The `Bert2D` model can be easily used with the `pipeline` API for tasks such as `fill-mask`.
```python
from transformers import pipeline
# Initialize the fill-mask pipeline with the Bert2D model
fill_mask_pipe = pipeline("fill-mask", model="yigitbekir/Bert2D-cased-Turkish-128K-WWM-NSW2")
# Example usage
masked_sentence = "Adamın mesleği [MASK] midir acaba?"
predictions = fill_mask_pipe(masked_sentence)
# Print the top predictions
for prediction in predictions:
print(f"Token: {prediction['token_str']}")
print(f"Sequence: {prediction['sequence']}")
print(f"Score: {prediction['score']:.4f}")
print("-" * 20)
```
**Predicted Output:**
```
Token: mühendis
Sequence: Adamın mesleği mühendis midir acaba?
Score: 0.2393
--------------------
Token: doktor
Sequence: Adamın mesleği doktor midir acaba?
Score: 0.1698
--------------------
Token: asker
Sequence: Adamın mesleği asker midir acaba?
Score: 0.0537
--------------------
Token: memur
Sequence: Adamın mesleği memur midir acaba?
Score: 0.0471
--------------------
Token: öğretmen
Sequence: Adamın mesleği öğretmen midir acaba?
Score: 0.0463
--------------------
```
### Fine-Tuning Considerations
When fine-tuning a `Bert2D` model, it is crucial to use the model's specific configuration. The introduction of `max_word_position_embeddings` and `max_intermediate_subword_position_embeddings` means that standard BERT configuration files are not directly compatible. Ensure that you are using the `Bert2DConfig` and its associated parameters for optimal performance.
### Motivation and Context
Traditional NLP models often struggle with languages that have rich morphology, as the vast number of word forms for a single root makes it difficult for models with 1D positional embeddings to generalize effectively. The **Bert2D** architecture was developed to address this limitation, and initial experiments on Turkish have shown that it consistently outperforms strong monolingual models across a range of downstream tasks.
### Future Work and Contributions
The developers are actively seeking contributions in the following areas:
* **Pretraining on other languages:** Particularly other morphologically complex languages like Finnish, Hungarian, and Korean.
* **Further architectural enhancements.**
* **Downstream task fine-tuning and evaluation.**
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
Weights: https://huggingface.co/yigitbekir/Bert2D-cased-Turkish-128K-WWM-NSW2
Article: [https://ieeexplore.ieee.org/document/10542953](https://ieeexplore.ieee.org/document/10542953)
The PR: #38707 | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38708/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/38707 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38707/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38707/comments | https://api.github.com/repos/huggingface/transformers/issues/38707/events | https://github.com/huggingface/transformers/pull/38707 | 3,131,509,624 | PR_kwDOCUB6oc6ZwAdO | 38,707 | Introducing Bert2D for Morphologically Rich Languages | {
"login": "yigit353",
"id": 852489,
"node_id": "MDQ6VXNlcjg1MjQ4OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/852489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yigit353",
"html_url": "https://github.com/yigit353",
"followers_url": "https://api.github.com/users/yigit353/followers",
"following_url": "https://api.github.com/users/yigit353/following{/other_user}",
"gists_url": "https://api.github.com/users/yigit353/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yigit353/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yigit353/subscriptions",
"organizations_url": "https://api.github.com/users/yigit353/orgs",
"repos_url": "https://api.github.com/users/yigit353/repos",
"events_url": "https://api.github.com/users/yigit353/events{/privacy}",
"received_events_url": "https://api.github.com/users/yigit353/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T21:15:14 | 2025-06-10T13:47:00 | 2025-06-10T13:47:00 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38707",
"html_url": "https://github.com/huggingface/transformers/pull/38707",
"diff_url": "https://github.com/huggingface/transformers/pull/38707.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38707.patch",
"merged_at": null
} | # What does this PR do?
This pull request introduces **Bert2D**, a novel architecture based on `BertModel` that incorporates a two-dimensional word embedding system. This new model is specifically designed to enhance performance on morphologically rich languages, such as Turkish and Finnish. This initial release includes the model implementation and a pretrained checkpoint for Turkish.
This work is based on the research outlined in the paper **"Bert2D: A 2D-Word Embedding for Morphologically Rich Languages"**, which has been accepted by IEEE and is available at: [https://ieeexplore.ieee.org/document/10542953](https://ieeexplore.ieee.org/document/10542953).
A working and pretrained model checkpoint for Turkish is available on the Hugging Face Hub at: [https://huggingface.co/yigitbekir/Bert2D-cased-Turkish-128K-WWM-NSW2](https://huggingface.co/yigitbekir/Bert2D-cased-Turkish-128K-WWM-NSW2)
---
### **Description**
The core innovation of **Bert2D** is the introduction of a 2D positional embedding mechanism. Standard BERT models utilize a 1D positional embedding, which can be suboptimal for languages with complex morphological structures and more flexible word order. Bert2D addresses this by employing a dual embedding system:
1. **Whole-Word Positional Embeddings (1st Dimension):** Captures the absolute position of each *word* (not sub-word token) in the sequence.
2. **Sub-word Relative Positional Embeddings (2nd Dimension):** Encodes the relative position of sub-words within each word. This is the key innovation, allowing the model to differentiate between the start, middle, and end sub-tokens of a word.
This two-dimensional approach allows the model to better understand the relationships between words and their constituent morphemes, leading to a more nuanced representation of meaning, which is particularly beneficial for agglutinative languages.
Additionally, this implementation incorporates **Whole Word Masking (WWM)**, a training technique where all sub-tokens corresponding to a single word are masked together. This encourages the model to learn deeper contextual relationships between words.
---
### **Architectural Innovations and Implementation**
This pull request introduces the following key components:
* **`Bert2DModel`**: A new model class that inherits from `BertPreTrainedModel` and implements the 2D embedding logic. The core changes are within the embeddings layer to accommodate the dual positional encoding.
* **`Bert2DTokenizer` and `Bert2DTokenizerFast`**: Custom tokenizer implementations that are compatible with the `Bert2D` model.
* **Model Variants**: Includes all standard variants of the BERT architecture, such as `Bert2DForMaskedLM`, `Bert2DForSequenceClassification`, `Bert2DForTokenClassification`, and `Bert2DForQuestionAnswering`.
* **New Configuration Parameters**: The `Bert2DConfig` introduces new parameters to control the 2D embeddings:
* `max_word_position_embeddings`: An integer that defines the maximum number of *words* (not sub-tokens) the model can process in a single sequence. Defaults to `512`.
* `max_intermediate_subword_position_embeddings`: An integer that defines the embedding value for intermediate sub-tokens within a word. For the `NSW2` strategy, this is set to `2`.
The 2D embeddings are summed with the token and segment embeddings before being passed to the Transformer layers, ensuring seamless integration with the standard BERT architecture. The parameter count is nearly identical to a standard BERT model; the `128K` in the checkpoint name refers to the vocabulary size, not the number of parameters.
---
### **Example Usage**
The `Bert2D` model can be easily used with the `pipeline` API for tasks like `fill-mask`.
```python
from transformers import pipeline
# Initialize the fill-mask pipeline with the Bert2D model
fill_mask_pipe = pipeline("fill-mask", model="yigitbekir/Bert2D-cased-Turkish-128K-WWM-NSW2")
# Example usage
masked_sentence = "Adamın mesleği [MASK] midir acaba?"
predictions = fill_mask_pipe(masked_sentence)
# Print the top predictions
for prediction in predictions:
print(f"Token: {prediction['token_str']}")
print(f"Sequence: {prediction['sequence']}")
print(f"Score: {prediction['score']:.4f}")
print("-" * 20)
```
**Predicted Output:**
```
Token: mühendis
Sequence: Adamın mesleği mühendis midir acaba?
Score: 0.2393
--------------------
Token: doktor
Sequence: Adamın mesleği doktor midir acaba?
Score: 0.1698
--------------------
Token: asker
Sequence: Adamın mesleği asker midir acaba?
Score: 0.0537
--------------------
Token: memur
Sequence: Adamın mesleği memur midir acaba?
Score: 0.0471
--------------------
Token: öğretmen
Sequence: Adamın mesleği öğretmen midir acaba?
Score: 0.0463
--------------------
```
---
### **Fine-Tuning Considerations**
When fine-tuning a `Bert2D` model, users must pay close attention to the model's specific configuration. The introduction of `max_word_position_embeddings` and `max_intermediate_subword_position_embeddings` means that standard BERT configuration files are not directly compatible. Ensure that you are using the `Bert2DConfig` and its associated parameters to achieve correct and optimal performance.
---
### **Motivation and Context**
Languages with rich morphology, like Turkish, Finnish, and Hungarian, pose a significant challenge for traditional NLP models. The vast number of possible word forms for a single root makes it difficult for models with 1D positional embeddings to generalize effectively. The **Bert2D** architecture was developed to directly address this limitation, and our initial experiments on Turkish have shown that it consistently outperforms strong monolingual models across a range of downstream tasks.
---
### **Future Work and Call for Contributions**
We believe that the **Bert2D** architecture holds significant promise for improving NLP performance in a wide range of languages. We are actively seeking contributions in the following areas:
* **Pretraining on other languages:** We are particularly interested in seeing **Bert2D** trained on other morphologically complex languages like Finnish, Hungarian, and Korean.
* **Further architectural enhancements:** We are open to suggestions and improvements to the current architecture.
* **Downstream task fine-tuning and evaluation:** We encourage the community to fine-tune and evaluate **Bert2D** on various downstream tasks and report their findings.
We believe that the addition of **Bert2D** to the Transformers library will be a valuable resource for the community and will spur further research into developing more effective models for a wider range of the world's languages.
Thank you @ArthurZucker
**EDIT**: All tests passed
**EDIT 2**: Opened issue #38708
| {
"login": "yigit353",
"id": 852489,
"node_id": "MDQ6VXNlcjg1MjQ4OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/852489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yigit353",
"html_url": "https://github.com/yigit353",
"followers_url": "https://api.github.com/users/yigit353/followers",
"following_url": "https://api.github.com/users/yigit353/following{/other_user}",
"gists_url": "https://api.github.com/users/yigit353/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yigit353/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yigit353/subscriptions",
"organizations_url": "https://api.github.com/users/yigit353/orgs",
"repos_url": "https://api.github.com/users/yigit353/repos",
"events_url": "https://api.github.com/users/yigit353/events{/privacy}",
"received_events_url": "https://api.github.com/users/yigit353/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38707/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38706 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38706/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38706/comments | https://api.github.com/repos/huggingface/transformers/issues/38706/events | https://github.com/huggingface/transformers/pull/38706 | 3,131,492,592 | PR_kwDOCUB6oc6Zv8uh | 38,706 | Fix smart resize | {
"login": "rdonggroq",
"id": 210547133,
"node_id": "U_kgDODIyxvQ",
"avatar_url": "https://avatars.githubusercontent.com/u/210547133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rdonggroq",
"html_url": "https://github.com/rdonggroq",
"followers_url": "https://api.github.com/users/rdonggroq/followers",
"following_url": "https://api.github.com/users/rdonggroq/following{/other_user}",
"gists_url": "https://api.github.com/users/rdonggroq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rdonggroq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rdonggroq/subscriptions",
"organizations_url": "https://api.github.com/users/rdonggroq/orgs",
"repos_url": "https://api.github.com/users/rdonggroq/repos",
"events_url": "https://api.github.com/users/rdonggroq/events{/privacy}",
"received_events_url": "https://api.github.com/users/rdonggroq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T21:06:29 | 2025-06-10T08:59:53 | 2025-06-10T08:59:22 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38706",
"html_url": "https://github.com/huggingface/transformers/pull/38706",
"diff_url": "https://github.com/huggingface/transformers/pull/38706.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38706.patch",
"merged_at": "2025-06-10T08:59:22"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This currently throws:
```python
import torch
from transformers import Qwen2VLImageProcessorFast
from transformers.image_utils import ChannelDimension
processor = Qwen2VLImageProcessorFast()
format = ChannelDimension.FIRST
image = torch.zeros((3, 100, 100))
size = {"shortest_edge": 100, "longest_edge": 100}
processor.preprocess(image, input_data_format=format, size=size)
```
<details>
<summary>Error</summary>
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[38], line 8
6 image = torch.zeros((3, 100, 100))
7 size = {"shortest_edge":100, "longest_edge":100}
----> 8 processor.preprocess(image, input_data_format=format, size=size)
File /nix/store/xg7lc4jxm5p199b01nndbfqr3fy4p8g7-python3.10-transformers-4.50.3/lib/python3.10/site-packages/transformers/models/qwen2_vl/image_processing_qwen2_vl_fast.py:397, in Qwen2VLImageProcessorFast.preprocess(self, images, videos, do_resize, size, resample, do_rescale, rescale_factor, do_normalize, image_mean, image_std, min_pixels, max_pixels, patch_size, temporal_patch_size, merge_size, do_convert_rgb, return_tensors, data_format, input_data_format, device, **kwargs)
395 pixel_values, vision_grid_thws = [], []
396 for image in images:
--> 397 patches, image_grid_thw = self._preprocess(
398 image,
399 do_resize=do_resize,
400 size=size,
401 interpolation=interpolation,
402 do_rescale=do_rescale,
403 rescale_factor=rescale_factor,
404 do_normalize=do_normalize,
405 image_mean=image_mean,
406 image_std=image_std,
407 patch_size=patch_size,
408 temporal_patch_size=temporal_patch_size,
409 merge_size=merge_size,
410 do_convert_rgb=do_convert_rgb,
411 input_data_format=input_data_format,
412 device=device,
413 )
414 pixel_values.extend(patches)
415 vision_grid_thws.append(image_grid_thw)
File /nix/store/xg7lc4jxm5p199b01nndbfqr3fy4p8g7-python3.10-transformers-4.50.3/lib/python3.10/site-packages/transformers/models/qwen2_vl/image_processing_qwen2_vl_fast.py:209, in Qwen2VLImageProcessorFast._preprocess(self, images, do_resize, size, interpolation, do_rescale, rescale_factor, do_normalize, image_mean, image_std, patch_size, temporal_patch_size, merge_size, do_convert_rgb, input_data_format, device)
201 if do_resize:
202 resized_height, resized_width = smart_resize(
203 height,
204 width,
(...)
207 max_pixels=size["longest_edge"],
208 )
--> 209 stacked_images = F.resize(
210 stacked_images, size=(resized_height, resized_width), interpolation=interpolation
211 )
212 resized_images_grouped[shape] = stacked_images
213 resized_images = reorder_images(resized_images_grouped, grouped_images_index)
File /nix/store/cifrlch412l6cnpa06qaf8lrqbs47pzh-python3.10-torchvision-0.20.1/lib/python3.10/site-packages/torchvision/transforms/v2/functional/_geometry.py:188, in resize(inpt, size, interpolation, max_size, antialias)
185 _log_api_usage_once(resize)
187 kernel = _get_kernel(resize, type(inpt))
--> 188 return kernel(inpt, size=size, interpolation=interpolation, max_size=max_size, antialias=antialias)
File /nix/store/cifrlch412l6cnpa06qaf8lrqbs47pzh-python3.10-torchvision-0.20.1/lib/python3.10/site-packages/torchvision/transforms/v2/functional/_geometry.py:260, in resize_image(image, size, interpolation, max_size, antialias)
257 if need_cast:
258 image = image.to(dtype=torch.float32)
--> 260 image = interpolate(
261 image,
262 size=[new_height, new_width],
263 mode=interpolation.value,
264 align_corners=align_corners,
265 antialias=antialias,
266 )
268 if need_cast:
269 if interpolation == InterpolationMode.BICUBIC and dtype == torch.uint8:
270 # This path is hit on non-AVX archs, or on GPU.
File /nix/store/93vlnr4hqnr20y3fm9j952p9zrjr3dqp-python3.10-torch-2.5.1/lib/python3.10/site-packages/torch/nn/functional.py:4591, in interpolate(input, size, scale_factor, mode, align_corners, recompute_scale_factor, antialias)
4589 assert align_corners is not None
4590 if antialias:
-> 4591 return torch._C._nn._upsample_bicubic2d_aa(
4592 input, output_size, align_corners, scale_factors
4593 )
4594 return torch._C._nn.upsample_bicubic2d(
4595 input, output_size, align_corners, scale_factors
4596 )
4598 if input.dim() == 3 and mode == "bilinear":
RuntimeError: Input and output sizes should be greater than 0, but got input (H: 100, W: 100) output (H: 0, W: 0)
```
</details>
This PR incorporates [the fix](https://github.com/QwenLM/Qwen2.5-VL/commit/a30e36facd0a5131d9ed59e93210c7ac5de75adb) from the Qwen repo. It also applies the same patch to Emu3 (replaces https://github.com/huggingface/transformers/pull/38150).
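For reference, a minimal sketch of the clamped `smart_resize` logic (the parameter defaults here are illustrative assumptions, not necessarily the library's): the essence of the fix is keeping both sides at positive multiples of `factor`, so the target size can never collapse to `(0, 0)`. Note the clamp can leave the result above `max_pixels`, trading the pixel cap for a valid, non-zero size.

```python
import math

def smart_resize(height, width, factor=28, min_pixels=56 * 56, max_pixels=14 * 14 * 4 * 1280):
    # Round both sides to multiples of `factor`, clamped to at least one patch.
    h_bar = max(factor, round(height / factor) * factor)
    w_bar = max(factor, round(width / factor) * factor)
    if h_bar * w_bar > max_pixels:
        # Shrink to fit the pixel budget; the max() guard prevents a 0-size output.
        beta = math.sqrt((height * width) / max_pixels)
        h_bar = max(factor, math.floor(height / beta / factor) * factor)
        w_bar = max(factor, math.floor(width / beta / factor) * factor)
    elif h_bar * w_bar < min_pixels:
        beta = math.sqrt(min_pixels / (height * width))
        h_bar = math.ceil(height * beta / factor) * factor
        w_bar = math.ceil(width * beta / factor) * factor
    return h_bar, w_bar

# The failing repro above: max_pixels=100 used to floor both sides to 0.
print(smart_resize(100, 100, factor=28, min_pixels=100, max_pixels=100))  # (28, 28)
```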
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@zucchini-nlp | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38706/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38705 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38705/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38705/comments | https://api.github.com/repos/huggingface/transformers/issues/38705/events | https://github.com/huggingface/transformers/pull/38705 | 3,131,439,996 | PR_kwDOCUB6oc6ZvxBb | 38,705 | Removing extra space in large command for speech-pretraining example | {
"login": "dggaytan",
"id": 109628982,
"node_id": "U_kgDOBojONg",
"avatar_url": "https://avatars.githubusercontent.com/u/109628982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dggaytan",
"html_url": "https://github.com/dggaytan",
"followers_url": "https://api.github.com/users/dggaytan/followers",
"following_url": "https://api.github.com/users/dggaytan/following{/other_user}",
"gists_url": "https://api.github.com/users/dggaytan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dggaytan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dggaytan/subscriptions",
"organizations_url": "https://api.github.com/users/dggaytan/orgs",
"repos_url": "https://api.github.com/users/dggaytan/repos",
"events_url": "https://api.github.com/users/dggaytan/events{/privacy}",
"received_events_url": "https://api.github.com/users/dggaytan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T20:40:57 | 2025-06-24T12:25:47 | 2025-06-24T12:24:57 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38705",
"html_url": "https://github.com/huggingface/transformers/pull/38705",
"diff_url": "https://github.com/huggingface/transformers/pull/38705.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38705.patch",
"merged_at": "2025-06-24T12:24:56"
} | # What does this PR do?
Adding quotes to the large run command for the speech pretraining example, since it could not be run when copied and pasted.
## Before submitting
- [✅] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [✅] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
cc:
@ArthurZucker
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38705/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38704 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38704/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38704/comments | https://api.github.com/repos/huggingface/transformers/issues/38704/events | https://github.com/huggingface/transformers/pull/38704 | 3,131,406,222 | PR_kwDOCUB6oc6Zvpbe | 38,704 | Fix `mllama` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T20:27:46 | 2025-06-12T14:15:37 | 2025-06-12T14:15:36 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38704",
"html_url": "https://github.com/huggingface/transformers/pull/38704",
"diff_url": "https://github.com/huggingface/transformers/pull/38704.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38704.patch",
"merged_at": "2025-06-12T14:15:36"
} | # What does this PR do?
`test_11b_model_integration_forward` has a dtype issue: it fails and affects other tests (GPU OOM).
I also update the expected values for A10.
All tests pass now on T4/A10 + torch 2.6/2.7 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38704/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38703 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38703/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38703/comments | https://api.github.com/repos/huggingface/transformers/issues/38703/events | https://github.com/huggingface/transformers/pull/38703 | 3,130,548,124 | PR_kwDOCUB6oc6ZssVP | 38,703 | [add-new-model-like] Robust search & proper outer '),' in tokenizer mapping | {
"login": "alexzms",
"id": 26690162,
"node_id": "MDQ6VXNlcjI2NjkwMTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/26690162?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexzms",
"html_url": "https://github.com/alexzms",
"followers_url": "https://api.github.com/users/alexzms/followers",
"following_url": "https://api.github.com/users/alexzms/following{/other_user}",
"gists_url": "https://api.github.com/users/alexzms/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexzms/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexzms/subscriptions",
"organizations_url": "https://api.github.com/users/alexzms/orgs",
"repos_url": "https://api.github.com/users/alexzms/repos",
"events_url": "https://api.github.com/users/alexzms/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexzms/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T14:50:58 | 2025-06-10T13:26:16 | 2025-06-10T12:25:12 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38703",
"html_url": "https://github.com/huggingface/transformers/pull/38703",
"diff_url": "https://github.com/huggingface/transformers/pull/38703.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38703.patch",
"merged_at": "2025-06-10T12:25:12"
} | ### What does this PR do?
This PR makes the `transformers-cli add-new-model-like` command usable again for any model whose tokenizer mapping is written on multiple lines (e.g. llama).
### Current failure modes
1. **`IndexError` while locating `TOKENIZER_MAPPING_NAMES`.**
Inside `insert_tokenizer_in_auto_module()` in `src/transformers/commands/add_new_model_like.py`.
The helper scans for the literal
```python
" TOKENIZER_MAPPING_NAMES = OrderedDict("
```
(four leading spaces, no type annotation).
Since in the current version that line is un-indented **and** type-annotated:
```python
TOKENIZER_MAPPING_NAMES = OrderedDict[str, tuple[Optional[str], Optional[str]]](
```
The hard-coded `startswith()` never matches, the loop overruns `lines`, and the command aborts with
```
IndexError: list index out of range
```
2. **Unbalanced parentheses once the above is patched.**
When an entry in the tokenizer mapping in `src/transformers/models/auto/tokenization_auto.py` spans several lines, the script copies only the inner block ending in
```
),
```
but forgets the outer
```
),
```
line. Insertion borrows this outer `),` from the previous entry, leaving that entry syntactically broken and rendering `tokenization_auto.py` unimportable.
**Fix**
* Replace the fixed-width search with a regex tolerant of any indentation and optional type annotations:
```python
pattern_tokenizer = re.compile(r"^\s*TOKENIZER_MAPPING_NAMES\s*=\s*OrderedDict\b")
```
* While copying a multi-line mapping block, keep collecting until the outer
```python
),
```
line is also captured, ensuring the new block is fully closed before insertion.
No external dependencies are introduced; only the standard-library `re` module is used.
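For illustration, a minimal sketch showing that the tolerant pattern matches both the old four-space-indented header and the new un-indented, type-annotated one (the two example lines below are copied from the failure description above):

```python
import re

# Tolerant of any indentation and an optional type annotation before "(".
pattern_tokenizer = re.compile(r"^\s*TOKENIZER_MAPPING_NAMES\s*=\s*OrderedDict\b")

old_style = "    TOKENIZER_MAPPING_NAMES = OrderedDict("
new_style = "TOKENIZER_MAPPING_NAMES = OrderedDict[str, tuple[Optional[str], Optional[str]]]("

print(bool(pattern_tokenizer.match(old_style)))  # True
print(bool(pattern_tokenizer.match(new_style)))  # True
```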
After the patch, running
```bash
transformers-cli add-new-model-like
```
completes without errors, and
```bash
python -m py_compile src/transformers/models/auto/tokenization_auto.py
```
succeeds.
I have not added a dedicated unit test; the change touches a dev-only CLI and has been verified manually. If desired, I can add a small test in `tests/commands/`. Also, I believe no documentation changes are needed for this modification.
No issue number exists; the bug is reproducible via the steps above.
## Before submitting
- [ ] This PR fixes a typo or improves the docs
- [x] Did you read the contributor guideline?
- [ ] Was this discussed/approved via a GitHub issue or the forum?
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
_No necessary documentation/testcases updates are required for this change._
## Who can review?
@ArthurZucker @gante
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38703/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38702 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38702/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38702/comments | https://api.github.com/repos/huggingface/transformers/issues/38702/events | https://github.com/huggingface/transformers/issues/38702 | 3,130,541,534 | I_kwDOCUB6oc66mEXe | 38,702 | Incorrect scaling of Gemma embeddings in float32 regime | {
"login": "norpadon",
"id": 6224581,
"node_id": "MDQ6VXNlcjYyMjQ1ODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6224581?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/norpadon",
"html_url": "https://github.com/norpadon",
"followers_url": "https://api.github.com/users/norpadon/followers",
"following_url": "https://api.github.com/users/norpadon/following{/other_user}",
"gists_url": "https://api.github.com/users/norpadon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/norpadon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/norpadon/subscriptions",
"organizations_url": "https://api.github.com/users/norpadon/orgs",
"repos_url": "https://api.github.com/users/norpadon/repos",
"events_url": "https://api.github.com/users/norpadon/events{/privacy}",
"received_events_url": "https://api.github.com/users/norpadon/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-09T14:48:24 | 2025-08-11T08:03:23 | 2025-08-11T08:03:23 | CONTRIBUTOR | null | null | null | null | ### System Info
Irrelevant
### Who can help?
@ArthurZucker
Google's Gemma implementation casts the embedding scale to bfloat16, which rounds 33.9411 to 34.0.
To match this behaviour, the [HF implementation](https://github.com/huggingface/transformers/blob/d7b87b415a5dd4a3152051e1a0abd098a02c5bfa/src/transformers/models/gemma3/modeling_gemma3.py#L133) does
```python
super().forward(input_ids) * self.embed_scale.to(self.weight.dtype)
```
This results in incorrect scaling behaviour if the model is loaded in float32 precision.
Relevant PR: https://github.com/huggingface/transformers/pull/29402
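The rounding itself can be reproduced without loading the model. The sketch below assumes the scale is `sqrt(hidden_size)` with `hidden_size = 1152` (an assumption for gemma-3-1b), and emulates bfloat16 by rounding a float32 bit pattern to its upper 16 bits, round-to-nearest-even:

```python
import struct

def to_bfloat16(x: float) -> float:
    # Reinterpret x as float32 bits, then round to the upper 16 bits
    # (round-to-nearest, ties-to-even), i.e. emulate a bfloat16 cast.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    lower = bits & 0xFFFF
    bits >>= 16
    if lower > 0x8000 or (lower == 0x8000 and bits & 1):
        bits += 1
    return struct.unpack(">f", struct.pack(">I", bits << 16))[0]

scale = 1152 ** 0.5          # sqrt(hidden_size); 1152 assumed here
print(scale)                 # ≈ 33.9411
print(to_bfloat16(scale))    # 34.0 — the value Google's checkpoint effectively uses
```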
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM
>>> hf_model = AutoModelForCausalLM.from_pretrained(
...     "google/gemma-3-1b-it",
...     torch_dtype=torch.float32,
...     device_map="cpu",
... )
>>> hf_model.model.embed_tokens.embed_scale
tensor(33.9411)
```
### Expected behavior
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM
>>> hf_model = AutoModelForCausalLM.from_pretrained(
...     "google/gemma-3-1b-it",
...     torch_dtype=torch.float32,
...     device_map="cpu",
... )
>>> hf_model.model.embed_tokens.embed_scale
tensor(34.0)
```
 | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38702/reactions",
"total_count": 4,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38702/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38701 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38701/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38701/comments | https://api.github.com/repos/huggingface/transformers/issues/38701/events | https://github.com/huggingface/transformers/pull/38701 | 3,130,321,714 | PR_kwDOCUB6oc6Zr6no | 38,701 | Update some tests for torch 2.7.1 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T13:29:49 | 2025-06-10T09:46:54 | 2025-06-10T09:46:52 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38701",
"html_url": "https://github.com/huggingface/transformers/pull/38701",
"diff_url": "https://github.com/huggingface/transformers/pull/38701.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38701.patch",
"merged_at": "2025-06-10T09:46:52"
} | # What does this PR do?
Just some updates of expected outputs.
I changed the dtype used in `InternVLQwen2IntegrationTest` from `bfloat16` to `float16`, as the outputs with `bfloat16` on the `T4` machine are not very stable (here, across torch versions), and therefore I had to update the outputs for A10 `("cuda", 8)` too.
Now all the tests are passing on T4/A10 with torch 2.6/2.7 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38701/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38700 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38700/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38700/comments | https://api.github.com/repos/huggingface/transformers/issues/38700/events | https://github.com/huggingface/transformers/pull/38700 | 3,130,223,599 | PR_kwDOCUB6oc6Zrk5X | 38,700 | Small fixes amd | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T12:51:38 | 2025-06-10T14:02:22 | 2025-06-10T14:02:22 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38700",
"html_url": "https://github.com/huggingface/transformers/pull/38700",
"diff_url": "https://github.com/huggingface/transformers/pull/38700.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38700.patch",
"merged_at": "2025-06-10T14:02:22"
} | This PR fixes two small things:
- `janus` decoder handling, to avoid a multi-device error (will soon be merged into main, #38488 )
- `modernbert` passes a `pos_idx_in_fp32` kwarg to `RotaryEmbedding`, which is no longer an `__init__` argument in the parent class (https://github.com/Dao-AILab/flash-attention/commit/1870a0dc0285266c83ff2effbcc2a383cc4ee8c7) | {
"login": "mht-sharma",
"id": 21088122,
"node_id": "MDQ6VXNlcjIxMDg4MTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mht-sharma",
"html_url": "https://github.com/mht-sharma",
"followers_url": "https://api.github.com/users/mht-sharma/followers",
"following_url": "https://api.github.com/users/mht-sharma/following{/other_user}",
"gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions",
"organizations_url": "https://api.github.com/users/mht-sharma/orgs",
"repos_url": "https://api.github.com/users/mht-sharma/repos",
"events_url": "https://api.github.com/users/mht-sharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/mht-sharma/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38700/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38699 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38699/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38699/comments | https://api.github.com/repos/huggingface/transformers/issues/38699/events | https://github.com/huggingface/transformers/pull/38699 | 3,130,210,566 | PR_kwDOCUB6oc6Zrh-v | 38,699 | Standardize ByT5 model card format | {
"login": "yanamis",
"id": 72974057,
"node_id": "MDQ6VXNlcjcyOTc0MDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/72974057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanamis",
"html_url": "https://github.com/yanamis",
"followers_url": "https://api.github.com/users/yanamis/followers",
"following_url": "https://api.github.com/users/yanamis/following{/other_user}",
"gists_url": "https://api.github.com/users/yanamis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanamis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanamis/subscriptions",
"organizations_url": "https://api.github.com/users/yanamis/orgs",
"repos_url": "https://api.github.com/users/yanamis/repos",
"events_url": "https://api.github.com/users/yanamis/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanamis/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T12:47:05 | 2025-06-09T22:02:50 | 2025-06-09T22:02:50 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38699",
"html_url": "https://github.com/huggingface/transformers/pull/38699",
"diff_url": "https://github.com/huggingface/transformers/pull/38699.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38699.patch",
"merged_at": "2025-06-09T22:02:50"
} | # What does this PR do?
This PR updates the ByT5 model card to follow the standardized format as requested in #36979.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38699/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38698 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38698/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38698/comments | https://api.github.com/repos/huggingface/transformers/issues/38698/events | https://github.com/huggingface/transformers/pull/38698 | 3,130,209,798 | PR_kwDOCUB6oc6Zrhz5 | 38,698 | ! Fixed device_propreties unpacking in common tests | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T12:46:46 | 2025-06-10T04:44:16 | 2025-06-10T04:44:16 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38698",
"html_url": "https://github.com/huggingface/transformers/pull/38698",
"diff_url": "https://github.com/huggingface/transformers/pull/38698.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38698.patch",
"merged_at": "2025-06-10T04:44:16"
} | I added some code in a previous commit that altered the second element of `DeviceProperties`, which can now be a tuple. This broke some code outside the file where the change was made; this PR fixes it. | {
"login": "mht-sharma",
"id": 21088122,
"node_id": "MDQ6VXNlcjIxMDg4MTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mht-sharma",
"html_url": "https://github.com/mht-sharma",
"followers_url": "https://api.github.com/users/mht-sharma/followers",
"following_url": "https://api.github.com/users/mht-sharma/following{/other_user}",
"gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions",
"organizations_url": "https://api.github.com/users/mht-sharma/orgs",
"repos_url": "https://api.github.com/users/mht-sharma/repos",
"events_url": "https://api.github.com/users/mht-sharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/mht-sharma/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38698/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38697 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38697/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38697/comments | https://api.github.com/repos/huggingface/transformers/issues/38697/events | https://github.com/huggingface/transformers/pull/38697 | 3,130,196,641 | PR_kwDOCUB6oc6Zre5n | 38,697 | Added Expectations for AMD (mostly bnb) | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T12:41:15 | 2025-06-10T04:44:31 | 2025-06-10T04:44:31 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38697",
"html_url": "https://github.com/huggingface/transformers/pull/38697",
"diff_url": "https://github.com/huggingface/transformers/pull/38697.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38697.patch",
"merged_at": "2025-06-10T04:44:31"
} | This PR adds `Expectations` for AMD `bitsandbytes`-related tests.
cc. @mht-sharma | {
"login": "mht-sharma",
"id": 21088122,
"node_id": "MDQ6VXNlcjIxMDg4MTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/21088122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mht-sharma",
"html_url": "https://github.com/mht-sharma",
"followers_url": "https://api.github.com/users/mht-sharma/followers",
"following_url": "https://api.github.com/users/mht-sharma/following{/other_user}",
"gists_url": "https://api.github.com/users/mht-sharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mht-sharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mht-sharma/subscriptions",
"organizations_url": "https://api.github.com/users/mht-sharma/orgs",
"repos_url": "https://api.github.com/users/mht-sharma/repos",
"events_url": "https://api.github.com/users/mht-sharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/mht-sharma/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38697/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38696 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38696/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38696/comments | https://api.github.com/repos/huggingface/transformers/issues/38696/events | https://github.com/huggingface/transformers/pull/38696 | 3,130,172,434 | PR_kwDOCUB6oc6ZrZhL | 38,696 | FP-Quant support | {
"login": "BlackSamorez",
"id": 16901341,
"node_id": "MDQ6VXNlcjE2OTAxMzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/16901341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BlackSamorez",
"html_url": "https://github.com/BlackSamorez",
"followers_url": "https://api.github.com/users/BlackSamorez/followers",
"following_url": "https://api.github.com/users/BlackSamorez/following{/other_user}",
"gists_url": "https://api.github.com/users/BlackSamorez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BlackSamorez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BlackSamorez/subscriptions",
"organizations_url": "https://api.github.com/users/BlackSamorez/orgs",
"repos_url": "https://api.github.com/users/BlackSamorez/repos",
"events_url": "https://api.github.com/users/BlackSamorez/events{/privacy}",
"received_events_url": "https://api.github.com/users/BlackSamorez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T12:31:24 | 2025-07-24T15:28:52 | 2025-07-23T09:41:10 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38696",
"html_url": "https://github.com/huggingface/transformers/pull/38696",
"diff_url": "https://github.com/huggingface/transformers/pull/38696.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38696.patch",
"merged_at": "2025-07-23T09:41:10"
} | # This PR adds support for the FP-Quant method.
The goal of this PR is to integrate inference and training support for the FP-Quant method that utilizes the Hadamard Transform for efficient weights+activations quantization. When using it with MXFP4 and MSE-based scaling, it implements [Quartet forward pass](https://arxiv.org/abs/2505.14669). We're also working on adding NVFP4 support and backward pass support.
Currently, we're working on the kernels [here](https://github.com/IST-DASLab/qutlass), and the integration [here](https://github.com/IST-DASLab/FP-Quant).
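To make the idea concrete, here is a toy sketch (illustrative only, not the qutlass kernels): MXFP4 stores each block of values with a shared power-of-two scale and FP4 (E2M1) elements, and MSE-based scaling picks the scale that minimizes the squared error of the quantized block. The exhaustive exponent search below is an assumption made for clarity, not how the real kernels implement it.

```python
import math

# FP4 (E2M1) representable magnitudes, the element grid used by MXFP4.
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def snap(x, scale):
    """Round one value to the nearest signed grid point at a given block scale."""
    mag = min(FP4_GRID, key=lambda g: abs(abs(x) - g * scale))
    return math.copysign(mag * scale, x)

def quantize_block(block, exponents=range(-4, 5)):
    """Pick the power-of-two block scale minimizing MSE, then quantize the block."""
    best = None
    for e in exponents:
        scale = 2.0 ** e
        q = [snap(x, scale) for x in block]
        err = sum((a - b) ** 2 for a, b in zip(block, q))
        if best is None or err < best[0]:
            best = (err, scale, q)
    return best[1], best[2]
```

For example, `quantize_block([1.0, 2.0, 3.0, 4.0])` selects scale `1.0`, since all four values land exactly on the FP4 grid there.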
Installation:
1. Install `qutlass`: `git clone https://github.com/IST-DASLab/qutlass.git && cd qutlass && pip install --no-build-isolation .`
2. Install `fp_quant`: `pip install fp_quant`
Usage:
1. Quantize any BF16 model on the fly (JIT) by passing `quantization_config=FPQuantConfig()`
2. Calibrate with GPTQ using [the repo](https://github.com/IST-DASLab/FP-Quant) and the `--real_quant` flag.
3. Use pre-quantized models from hub: coming soon...
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38696/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 4,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38696/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38695 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38695/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38695/comments | https://api.github.com/repos/huggingface/transformers/issues/38695/events | https://github.com/huggingface/transformers/pull/38695 | 3,129,884,233 | PR_kwDOCUB6oc6ZqZik | 38,695 | [NEW MODEL] MViTV2 | {
"login": "kamila-chay",
"id": 201148875,
"node_id": "U_kgDOC_1Jyw",
"avatar_url": "https://avatars.githubusercontent.com/u/201148875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kamila-chay",
"html_url": "https://github.com/kamila-chay",
"followers_url": "https://api.github.com/users/kamila-chay/followers",
"following_url": "https://api.github.com/users/kamila-chay/following{/other_user}",
"gists_url": "https://api.github.com/users/kamila-chay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kamila-chay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamila-chay/subscriptions",
"organizations_url": "https://api.github.com/users/kamila-chay/orgs",
"repos_url": "https://api.github.com/users/kamila-chay/repos",
"events_url": "https://api.github.com/users/kamila-chay/events{/privacy}",
"received_events_url": "https://api.github.com/users/kamila-chay/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-06-09T10:30:31 | 2025-06-30T10:23:49 | 2025-06-30T10:23:49 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38695",
"html_url": "https://github.com/huggingface/transformers/pull/38695",
"diff_url": "https://github.com/huggingface/transformers/pull/38695.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38695.patch",
"merged_at": null
} | # What does this PR do?
This PR introduces MViT, the basis for MeMViT, a video transformer requested in the issue below. I'm currently doing research in fine-grained video understanding, and MeMViT happens to be able to process long video sequences. Unfortunately, it's only available from the official FAIR repo, which can be hard to integrate. That's why I decided to implement it here; to keep things cleaner and more modular, I started with MViTv2, which is also a great vision transformer (mainly for image-related tasks).
Fixes [20545](https://github.com/huggingface/transformers/issues/20545)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@amyeroberts @qubvel
| {
"login": "kamila-chay",
"id": 201148875,
"node_id": "U_kgDOC_1Jyw",
"avatar_url": "https://avatars.githubusercontent.com/u/201148875?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kamila-chay",
"html_url": "https://github.com/kamila-chay",
"followers_url": "https://api.github.com/users/kamila-chay/followers",
"following_url": "https://api.github.com/users/kamila-chay/following{/other_user}",
"gists_url": "https://api.github.com/users/kamila-chay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kamila-chay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamila-chay/subscriptions",
"organizations_url": "https://api.github.com/users/kamila-chay/orgs",
"repos_url": "https://api.github.com/users/kamila-chay/repos",
"events_url": "https://api.github.com/users/kamila-chay/events{/privacy}",
"received_events_url": "https://api.github.com/users/kamila-chay/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38695/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38694 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38694/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38694/comments | https://api.github.com/repos/huggingface/transformers/issues/38694/events | https://github.com/huggingface/transformers/pull/38694 | 3,129,845,400 | PR_kwDOCUB6oc6ZqQ7l | 38,694 | Fix some models import | {
"login": "nicelulu",
"id": 18606973,
"node_id": "MDQ6VXNlcjE4NjA2OTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/18606973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicelulu",
"html_url": "https://github.com/nicelulu",
"followers_url": "https://api.github.com/users/nicelulu/followers",
"following_url": "https://api.github.com/users/nicelulu/following{/other_user}",
"gists_url": "https://api.github.com/users/nicelulu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicelulu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicelulu/subscriptions",
"organizations_url": "https://api.github.com/users/nicelulu/orgs",
"repos_url": "https://api.github.com/users/nicelulu/repos",
"events_url": "https://api.github.com/users/nicelulu/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicelulu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T10:13:36 | 2025-06-10T13:24:05 | 2025-06-09T15:09:25 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38694",
"html_url": "https://github.com/huggingface/transformers/pull/38694",
"diff_url": "https://github.com/huggingface/transformers/pull/38694.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38694.patch",
"merged_at": "2025-06-09T15:09:25"
} | # What does this PR do?
Fix some model imports.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38694/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38693 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38693/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38693/comments | https://api.github.com/repos/huggingface/transformers/issues/38693/events | https://github.com/huggingface/transformers/issues/38693 | 3,129,721,893 | I_kwDOCUB6oc66i8Ql | 38,693 | Add MulT Model from “Multimodal Transformer for Unaligned Multimodal Language Sequences" paper | {
"login": "Vixel2006",
"id": 166058059,
"node_id": "U_kgDOCeXYSw",
"avatar_url": "https://avatars.githubusercontent.com/u/166058059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vixel2006",
"html_url": "https://github.com/Vixel2006",
"followers_url": "https://api.github.com/users/Vixel2006/followers",
"following_url": "https://api.github.com/users/Vixel2006/following{/other_user}",
"gists_url": "https://api.github.com/users/Vixel2006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vixel2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vixel2006/subscriptions",
"organizations_url": "https://api.github.com/users/Vixel2006/orgs",
"repos_url": "https://api.github.com/users/Vixel2006/repos",
"events_url": "https://api.github.com/users/Vixel2006/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vixel2006/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-06-09T09:26:37 | 2025-06-09T15:12:02 | 2025-06-09T14:41:30 | CONTRIBUTOR | null | null | null | null | ### Model description
I’d like to contribute an implementation of the MulT model described in the paper:
**Title:** Multimodal Transformer for Unaligned Multimodal Language Sequences
**Paper:** [https://arxiv.org/pdf/1906.00295v1](https://arxiv.org/pdf/1906.00295v1)
### 💡 Motivation
MulT is a powerful approach for multimodal learning, combining textual, visual, and audio data via direct cross-modal attention. It’s not yet available in `transformers`, and I think it would be a valuable addition for researchers and developers interested in multimodal tasks.
### 🛠️ My Plan
I am currently working on implementing MulT, following the paper’s details and aligning with the `transformers` library standards.
Here’s a rough checklist of what I plan to do:
- Model architecture and layers
- Unit tests and example scripts
- Documentation
It may take me some time because I'm new to open-source contributions.
I’ll keep this issue updated with progress and would be happy to receive any suggestions!
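For reference, the core idea — one modality's sequence attending directly to another's — can be sketched roughly as below. This is a minimal PyTorch sketch of my own, not the paper's reference implementation; the module and argument names are illustrative:

```python
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    """One cross-modal attention block: `target` queries `source`
    (e.g. text attends to audio), followed by a feed-forward layer."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim)
        )
        self.norm_ff = nn.LayerNorm(dim)

    def forward(self, target: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        # Queries come from the target modality; keys/values from the source.
        q, kv = self.norm_q(target), self.norm_kv(source)
        attended, _ = self.attn(q, kv, kv)
        x = target + attended
        return x + self.ff(self.norm_ff(x))
```

Because the queries and keys come from different sequences, the two modalities do not need to be aligned or even have the same length — which is the point of the paper.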
--- | {
"login": "Vixel2006",
"id": 166058059,
"node_id": "U_kgDOCeXYSw",
"avatar_url": "https://avatars.githubusercontent.com/u/166058059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vixel2006",
"html_url": "https://github.com/Vixel2006",
"followers_url": "https://api.github.com/users/Vixel2006/followers",
"following_url": "https://api.github.com/users/Vixel2006/following{/other_user}",
"gists_url": "https://api.github.com/users/Vixel2006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vixel2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vixel2006/subscriptions",
"organizations_url": "https://api.github.com/users/Vixel2006/orgs",
"repos_url": "https://api.github.com/users/Vixel2006/repos",
"events_url": "https://api.github.com/users/Vixel2006/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vixel2006/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38693/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38692 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38692/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38692/comments | https://api.github.com/repos/huggingface/transformers/issues/38692/events | https://github.com/huggingface/transformers/issues/38692 | 3,129,501,299 | I_kwDOCUB6oc66iGZz | 38,692 | CheckpointLoaderSimple ..... Error while deserializing header: InvalidHeaderDeserialization | {
"login": "saeedafm",
"id": 215473735,
"node_id": "U_kgDODNfeRw",
"avatar_url": "https://avatars.githubusercontent.com/u/215473735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saeedafm",
"html_url": "https://github.com/saeedafm",
"followers_url": "https://api.github.com/users/saeedafm/followers",
"following_url": "https://api.github.com/users/saeedafm/following{/other_user}",
"gists_url": "https://api.github.com/users/saeedafm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saeedafm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saeedafm/subscriptions",
"organizations_url": "https://api.github.com/users/saeedafm/orgs",
"repos_url": "https://api.github.com/users/saeedafm/repos",
"events_url": "https://api.github.com/users/saeedafm/events{/privacy}",
"received_events_url": "https://api.github.com/users/saeedafm/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-09T07:54:50 | 2025-07-17T08:02:39 | 2025-07-17T08:02:39 | NONE | null | null | null | null | ### System Info
Hi, I just downloaded the Flux models, and when I run the workflow I get the following error:
CheckpointLoaderSimple ..... Error while deserializing header: InvalidHeaderDeserialization
# ComfyUI Error Report
## Error Details
- **Node ID:** 3
- **Node Type:** CheckpointLoaderSimple
- **Exception Type:** safetensors_rust.SafetensorError
- **Exception Message:** Error while deserializing header: InvalidHeaderDeserialization
## Stack Trace
```
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 345, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 220, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 181, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\nodes.py", line 570, in load_checkpoint
out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 908, in load_checkpoint_guess_config
sd, metadata = comfy.utils.load_torch_file(ckpt_path, return_metadata=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 68, in load_torch_file
raise e
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 55, in load_torch_file
with safetensors.safe_open(ckpt, framework="pt", device=device.type) as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
## System Information
- **ComfyUI Version:** 0.3.29
- **Arguments:** ComfyUI\main.py --windows-standalone-build
- **OS:** nt
- **Python Version:** 3.12.9 (tags/v3.12.9:fdb8142, Feb 4 2025, 15:27:58) [MSC v.1942 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.6.0+cu126
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 4070 Laptop GPU : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 8585216000
- **VRAM Free:** 7398752256
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0
## Logs
```
2025-06-09T11:03:11.343705 - [START] Security scan2025-06-09T11:03:11.343705 -
2025-06-09T11:03:12.134459 - [DONE] Security scan2025-06-09T11:03:12.134459 -
2025-06-09T11:03:12.228807 - ## ComfyUI-Manager: installing dependencies done.2025-06-09T11:03:12.228807 -
2025-06-09T11:03:12.228807 - ** ComfyUI startup time:2025-06-09T11:03:12.228807 - 2025-06-09T11:03:12.228807 - 2025-06-09 11:03:12.2282025-06-09T11:03:12.228807 -
2025-06-09T11:03:12.228807 - ** Platform:2025-06-09T11:03:12.228807 - 2025-06-09T11:03:12.228807 - Windows2025-06-09T11:03:12.228807 -
2025-06-09T11:03:12.228807 - ** Python version:2025-06-09T11:03:12.228807 - 2025-06-09T11:03:12.228807 - 3.12.9 (tags/v3.12.9:fdb8142, Feb 4 2025, 15:27:58) [MSC v.1942 64 bit (AMD64)]2025-06-09T11:03:12.228807 -
2025-06-09T11:03:12.228807 - ** Python executable:2025-06-09T11:03:12.228807 - 2025-06-09T11:03:12.228807 - D:\Ai\ComfyUI_windows_portable\python_embeded\python.exe2025-06-09T11:03:12.228807 -
2025-06-09T11:03:12.228807 - ** ComfyUI Path:2025-06-09T11:03:12.228807 - 2025-06-09T11:03:12.228807 - D:\Ai\ComfyUI_windows_portable\ComfyUI2025-06-09T11:03:12.228807 -
2025-06-09T11:03:12.228807 - ** ComfyUI Base Folder Path:2025-06-09T11:03:12.228807 - 2025-06-09T11:03:12.228807 - D:\Ai\ComfyUI_windows_portable\ComfyUI2025-06-09T11:03:12.228807 -
2025-06-09T11:03:12.228807 - ** User directory:2025-06-09T11:03:12.228807 - 2025-06-09T11:03:12.228807 - D:\Ai\ComfyUI_windows_portable\ComfyUI\user2025-06-09T11:03:12.228807 -
2025-06-09T11:03:12.228807 - ** ComfyUI-Manager config path:2025-06-09T11:03:12.228807 - 2025-06-09T11:03:12.228807 - D:\Ai\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\config.ini2025-06-09T11:03:12.228807 -
2025-06-09T11:03:12.228807 - ** Log path:2025-06-09T11:03:12.228807 - 2025-06-09T11:03:12.228807 - D:\Ai\ComfyUI_windows_portable\ComfyUI\user\comfyui.log2025-06-09T11:03:12.228807 -
2025-06-09T11:03:13.171480 -
Prestartup times for custom nodes:
2025-06-09T11:03:13.171480 - 2.5 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
2025-06-09T11:03:13.171480 -
2025-06-09T11:03:17.015681 - Checkpoint files will always be loaded safely.
2025-06-09T11:03:17.292874 - Total VRAM 8188 MB, total RAM 65229 MB
2025-06-09T11:03:17.292874 - pytorch version: 2.6.0+cu126
2025-06-09T11:03:17.292874 - Set vram state to: NORMAL_VRAM
2025-06-09T11:03:17.292874 - Device: cuda:0 NVIDIA GeForce RTX 4070 Laptop GPU : cudaMallocAsync
2025-06-09T11:03:20.563827 - Using pytorch attention
2025-06-09T11:03:24.102716 - Python version: 3.12.9 (tags/v3.12.9:fdb8142, Feb 4 2025, 15:27:58) [MSC v.1942 64 bit (AMD64)]
2025-06-09T11:03:24.102716 - ComfyUI version: 0.3.29
2025-06-09T11:03:24.276988 - ComfyUI frontend version: 1.16.8
2025-06-09T11:03:24.276988 - [Prompt Server] web root: D:\Ai\ComfyUI_windows_portable\python_embeded\Lib\site-packages\comfyui_frontend_package\static
2025-06-09T11:03:25.400122 - Config Export Error2025-06-09T11:03:25.400122 - [Errno 2] No such file or directory: 'XXXHOST-PATHXXX\\PATH_CFG.json'2025-06-09T11:03:25.734006 - ComfyUI-GGUF: Partial torch compile only, consider updating pytorch
2025-06-09T11:03:25.753031 - ### Loading: ComfyUI-Impact-Pack (V8.15.3)2025-06-09T11:03:25.753031 -
2025-06-09T11:03:26.107634 - [Impact Pack] Wildcards loading done.2025-06-09T11:03:26.114681 -
2025-06-09T11:03:26.122418 - ### Loading: ComfyUI-Manager (V3.32.8)
2025-06-09T11:03:26.123251 - [ComfyUI-Manager] network_mode: public
2025-06-09T11:03:26.241014 - ### ComfyUI Revision: 3347 [93292bc4] *DETACHED | Released on '2025-04-17'
2025-06-09T11:03:26.716689 - [D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using ckpts path: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
2025-06-09T11:03:26.716689 - [D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using symlinks: False
2025-06-09T11:03:26.716689 - [D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
2025-06-09T11:03:27.017334 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-06-09T11:03:27.113419 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-06-09T11:03:27.135199 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-06-09T11:03:27.192743 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-06-09T11:03:27.716053 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-06-09T11:03:28.970262 - [ComfyUI-Manager] An error occurred while fetching 'https://api.comfy.org/nodes?page=1&limit=30&comfyui_version=v0.3.29&form_factor=git-windows': Expecting value: line 2 column 1 (char 1)
2025-06-09T11:03:28.970262 - Cannot connect to comfyregistry.2025-06-09T11:03:28.985997 -
2025-06-09T11:03:29.001753 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-06-09T11:03:29.001753 - 2025-06-09T11:03:29.055871 - D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
2025-06-09T11:03:29.327544 - [DONE]2025-06-09T11:03:29.327544 -
2025-06-09T11:03:29.524929 - [ComfyUI-Manager] All startup tasks have been completed.
2025-06-09T11:03:31.856243 - WAS Node Suite: OpenCV Python FFMPEG support is enabled2025-06-09T11:03:31.860328 -
2025-06-09T11:03:31.865483 - WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.2025-06-09T11:03:31.869930 -
2025-06-09T11:03:32.316900 - WAS Node Suite: Finished. Loaded 220 nodes successfully.2025-06-09T11:03:32.316900 -
2025-06-09T11:03:32.316900 -
"The only limit to our realization of tomorrow will be our doubts of today." - Franklin D. Roosevelt
2025-06-09T11:03:32.316900 -
2025-06-09T11:03:32.316900 -
Import times for custom nodes:
2025-06-09T11:03:32.316900 - 0.0 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2025-06-09T11:03:32.316900 - 0.0 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-ImageGallery-ED
2025-06-09T11:03:32.316900 - 0.0 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\Cup-ClipBoard
2025-06-09T11:03:32.316900 - 0.0 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CUP
2025-06-09T11:03:32.316900 - 0.0 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_experiments
2025-06-09T11:03:32.316900 - 0.0 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet-main
2025-06-09T11:03:32.316900 - 0.1 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Tripo
2025-06-09T11:03:32.316900 - 0.1 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2025-06-09T11:03:32.316900 - 0.3 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF
2025-06-09T11:03:32.332849 - 0.4 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials-main
2025-06-09T11:03:32.332849 - 0.4 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
2025-06-09T11:03:32.332849 - 0.5 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
2025-06-09T11:03:32.332849 - 1.3 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\OneButtonPrompt
2025-06-09T11:03:32.332849 - 1.4 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui
2025-06-09T11:03:32.332849 - 2.6 seconds: D:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
2025-06-09T11:03:32.332849 -
2025-06-09T11:03:32.332849 - Starting server
2025-06-09T11:03:32.332849 - To see the GUI go to: http://127.0.0.1:8188
2025-06-09T11:03:37.650461 - got prompt
2025-06-09T11:03:38.025734 - !!! Exception during processing !!! Error while deserializing header: InvalidHeaderDeserialization
2025-06-09T11:03:38.034301 - Traceback (most recent call last):
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 345, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 220, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 181, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\nodes.py", line 570, in load_checkpoint
out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 908, in load_checkpoint_guess_config
sd, metadata = comfy.utils.load_torch_file(ckpt_path, return_metadata=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 68, in load_torch_file
raise e
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 55, in load_torch_file
with safetensors.safe_open(ckpt, framework="pt", device=device.type) as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
2025-06-09T11:03:38.040510 - Prompt executed in 0.38 seconds
2025-06-09T11:04:17.790751 - got prompt
2025-06-09T11:04:17.821252 - !!! Exception during processing !!! Error while deserializing header: InvalidHeaderDeserialization
2025-06-09T11:04:17.845292 - Traceback (most recent call last):
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 345, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 220, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 181, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\nodes.py", line 570, in load_checkpoint
out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 908, in load_checkpoint_guess_config
sd, metadata = comfy.utils.load_torch_file(ckpt_path, return_metadata=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 68, in load_torch_file
raise e
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 55, in load_torch_file
with safetensors.safe_open(ckpt, framework="pt", device=device.type) as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
2025-06-09T11:04:17.871446 - Prompt executed in 0.07 seconds
2025-06-09T11:21:30.198259 - got prompt
2025-06-09T11:21:30.212261 - !!! Exception during processing !!! Error while deserializing header: InvalidHeaderDeserialization
2025-06-09T11:21:30.214325 - Traceback (most recent call last):
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 345, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 220, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 181, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\nodes.py", line 570, in load_checkpoint
out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 908, in load_checkpoint_guess_config
sd, metadata = comfy.utils.load_torch_file(ckpt_path, return_metadata=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 68, in load_torch_file
raise e
File "D:\Ai\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 55, in load_torch_file
with safetensors.safe_open(ckpt, framework="pt", device=device.type) as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
safetensors_rust.SafetensorError: Error while deserializing header: InvalidHeaderDeserialization
2025-06-09T11:21:30.218271 - Prompt executed in 0.01 seconds
```
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
{"id":"00000000-0000-0000-0000-000000000000","revision":0,"last_node_id":12,"last_link_id":19,"nodes":[{"id":9,"type":"EmptyLatentImage","pos":[1117.3448486328125,362.3992919921875],"size":[315,106],"flags":{},"order":0,"mode":0,"inputs":[{"localized_name":"width","name":"width","type":"INT","widget":{"name":"width"},"link":null},{"localized_name":"height","name":"height","type":"INT","widget":{"name":"height"},"link":null},{"localized_name":"batch_size","name":"batch_size","type":"INT","widget":{"name":"batch_size"},"link":null}],"outputs":[{"localized_name":"LATENT","name":"LATENT","type":"LATENT","links":[9]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"EmptyLatentImage"},"widgets_values":[1024,1024,1]},{"id":5,"type":"CLIPTextEncode","pos":[1097.3914794921875,174.45838928222656],"size":[385.0762023925781,88],"flags":{},"order":4,"mode":0,"inputs":[{"localized_name":"clip","name":"clip","type":"CLIP","link":17},{"localized_name":"text","name":"text","type":"STRING","widget":{"name":"text"},"link":null}],"outputs":[{"localized_name":"CONDITIONING","name":"CONDITIONING","type":"CONDITIONING","links":[19]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"CLIPTextEncode"},"widgets_values":[""],"color":"#322","bgcolor":"#533"},{"id":4,"type":"CLIPTextEncode","pos":[1095.3770751953125,-71.65526580810547],"size":[400,200],"flags":{},"order":3,"mode":0,"inputs":[{"localized_name":"clip","name":"clip","type":"CLIP","link":16},{"localized_name":"text","name":"text","type":"STRING","widget":{"name":"text"},"link":null}],"outputs":[{"localized_name":"CONDITIONING","name":"CONDITIONING","type":"CONDITIONING","links":[18]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"CLIPTextEncode"},"widgets_values":["portrait of a 
woman"],"color":"#232","bgcolor":"#353"},{"id":10,"type":"VAEDecode","pos":[2219.472900390625,74.85771179199219],"size":[210,46],"flags":{},"order":6,"mode":0,"inputs":[{"localized_name":"samples","name":"samples","type":"LATENT","link":10},{"localized_name":"vae","name":"vae","type":"VAE","link":12}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[11]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":11,"type":"PreviewImage","pos":[2477.2490234375,76.54154968261719],"size":[210,246],"flags":{},"order":7,"mode":0,"inputs":[{"localized_name":"images","name":"images","type":"IMAGE","link":11}],"outputs":[],"properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"PreviewImage"},"widgets_values":[]},{"id":7,"type":"KSampler","pos":[1767.0162353515625,-9.899660110473633],"size":[315,474],"flags":{},"order":5,"mode":0,"inputs":[{"localized_name":"model","name":"model","type":"MODEL","link":15},{"localized_name":"positive","name":"positive","type":"CONDITIONING","link":18},{"localized_name":"negative","name":"negative","type":"CONDITIONING","link":19},{"localized_name":"latent_image","name":"latent_image","type":"LATENT","link":9},{"localized_name":"seed","name":"seed","type":"INT","widget":{"name":"seed"},"link":null},{"localized_name":"steps","name":"steps","type":"INT","widget":{"name":"steps"},"link":null},{"localized_name":"cfg","name":"cfg","type":"FLOAT","widget":{"name":"cfg"},"link":null},{"localized_name":"sampler_name","name":"sampler_name","type":"COMBO","widget":{"name":"sampler_name"},"link":null},{"localized_name":"scheduler","name":"scheduler","type":"COMBO","widget":{"name":"scheduler"},"link":null},{"localized_name":"denoise","name":"denoise","type":"FLOAT","widget":{"name":"denoise"},"link":null}],"outputs":[{"localized_name":"LATENT","name":"LATENT","type":"LATENT","links":[10]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for 
S&R":"KSampler"},"widgets_values":[930925108629876,"randomize",20,1,"euler","beta",1]},{"id":12,"type":"LoraLoader","pos":[589.8250732421875,-32.128360748291016],"size":[315,126],"flags":{},"order":2,"mode":0,"inputs":[{"localized_name":"model","name":"model","type":"MODEL","link":13},{"localized_name":"clip","name":"clip","type":"CLIP","link":14},{"localized_name":"lora_name","name":"lora_name","type":"COMBO","widget":{"name":"lora_name"},"link":null},{"localized_name":"strength_model","name":"strength_model","type":"FLOAT","widget":{"name":"strength_model"},"link":null},{"localized_name":"strength_clip","name":"strength_clip","type":"FLOAT","widget":{"name":"strength_clip"},"link":null}],"outputs":[{"localized_name":"MODEL","name":"MODEL","type":"MODEL","links":[15]},{"localized_name":"CLIP","name":"CLIP","type":"CLIP","links":[16,17]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for S&R":"LoraLoader"},"widgets_values":["Design\\papercut.safetensors",1,1]},{"id":3,"type":"CheckpointLoaderSimple","pos":[79.66006469726562,92.55416870117188],"size":[315,98],"flags":{},"order":1,"mode":0,"inputs":[{"localized_name":"ckpt_name","name":"ckpt_name","type":"COMBO","widget":{"name":"ckpt_name"},"link":null}],"outputs":[{"localized_name":"MODEL","name":"MODEL","type":"MODEL","links":[13]},{"localized_name":"CLIP","name":"CLIP","type":"CLIP","links":[14]},{"localized_name":"VAE","name":"VAE","type":"VAE","links":[12]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.29","Node name for 
S&R":"CheckpointLoaderSimple"},"widgets_values":["flux1-dev-fp8.safetensors"]}],"links":[[9,9,0,7,3,"LATENT"],[10,7,0,10,0,"LATENT"],[11,10,0,11,0,"IMAGE"],[12,3,2,10,1,"VAE"],[13,3,0,12,0,"MODEL"],[14,3,1,12,1,"CLIP"],[15,12,0,7,0,"MODEL"],[16,12,1,4,0,"CLIP"],[17,12,1,5,0,"CLIP"],[18,4,0,7,1,"CONDITIONING"],[19,5,0,7,2,"CONDITIONING"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.9229599817706885,"offset":[-35.688659187764245,196.63612505210955]},"frontendVersion":"1.16.8","reroutes":[{"id":10,"pos":[1739.99658203125,715.8917846679688],"linkIds":[9]},{"id":11,"parentId":10,"pos":[1732.232666015625,484.27398681640625],"linkIds":[9]},{"id":12,"pos":[1338.0205078125,-364.29925537109375],"linkIds":[15]},{"id":13,"parentId":14,"pos":[2071.25439453125,874.9921264648438],"linkIds":[12]},{"id":14,"pos":[529.2798461914062,874.7973022460938],"linkIds":[12]}],"linkExtensions":[{"id":9,"parentId":11},{"id":12,"parentId":13},{"id":15,"parentId":12}]},"version":0.4}
```
## Additional Context
(Please add any additional context or steps to reproduce the error here)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My English is not very good; please fill in the reproduction steps.
Thank you so much.
### Expected behavior
My English is not very good; please fill in the expected behavior.
Thank you so much. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38692/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38691 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38691/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38691/comments | https://api.github.com/repos/huggingface/transformers/issues/38691/events | https://github.com/huggingface/transformers/pull/38691 | 3,129,482,332 | PR_kwDOCUB6oc6ZpBbY | 38,691 | fix(qwen3_moe): pass kwargs to self_attn | {
"login": "llllvvuu",
"id": 5601392,
"node_id": "MDQ6VXNlcjU2MDEzOTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5601392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/llllvvuu",
"html_url": "https://github.com/llllvvuu",
"followers_url": "https://api.github.com/users/llllvvuu/followers",
"following_url": "https://api.github.com/users/llllvvuu/following{/other_user}",
"gists_url": "https://api.github.com/users/llllvvuu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/llllvvuu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/llllvvuu/subscriptions",
"organizations_url": "https://api.github.com/users/llllvvuu/orgs",
"repos_url": "https://api.github.com/users/llllvvuu/repos",
"events_url": "https://api.github.com/users/llllvvuu/events{/privacy}",
"received_events_url": "https://api.github.com/users/llllvvuu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T07:46:59 | 2025-06-11T17:26:09 | 2025-06-11T17:26:08 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38691",
"html_url": "https://github.com/huggingface/transformers/pull/38691",
"diff_url": "https://github.com/huggingface/transformers/pull/38691.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38691.patch",
"merged_at": "2025-06-11T17:26:08"
} | # What does this PR do?
This is needed to avoid `.item()` calls in `_flash_attention_forward`.
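To make the intent concrete, here is a toy sketch (not the actual Qwen3-MoE code — all names below are illustrative) of the pattern the PR title describes: the decoder layer forwards `**kwargs` down to `self_attn` instead of dropping them, so metadata such as precomputed flash-attention sequence lengths can reach the attention kernel without being recomputed via `.item()`.

```python
# Toy sketch of forwarding **kwargs from a decoder layer to its attention
# module. In the real model, kwargs would carry flash-attention metadata.

class ToyAttention:
    def forward(self, hidden_states, **kwargs):
        # The real attention would read flash-attention metadata from kwargs.
        return hidden_states, kwargs


class ToyDecoderLayer:
    def __init__(self):
        self.self_attn = ToyAttention()

    def forward(self, hidden_states, **kwargs):
        # The fix: pass kwargs through instead of silently dropping them.
        return self.self_attn.forward(hidden_states, **kwargs)


layer = ToyDecoderLayer()
out, received = layer.forward([1.0, 2.0], cu_seq_lens_q=[0, 4])
print(received)  # {'cu_seq_lens_q': [0, 4]}
```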
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38691/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38690 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38690/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38690/comments | https://api.github.com/repos/huggingface/transformers/issues/38690/events | https://github.com/huggingface/transformers/issues/38690 | 3,129,482,316 | I_kwDOCUB6oc66iBxM | 38,690 | [BUG] Got nan logits after mask logic refactor | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-09T07:46:58 | 2025-06-25T09:36:15 | 2025-06-23T00:53:01 | CONTRIBUTOR | null | null | null | null | ### System Info
torch 2.7.1
Regression introduced by #37866
### Who can help?
@SunMarc @cyrilzakka @ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
`pip install -U autoawq`
`pip install intel_extension_for_pytorch`
python script.py
```python
import torch
from transformers import pipeline, AutoTokenizer
model_id = "hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4"
texts = ["Once upon a time, there existed a little girl, who liked to have adventures. She wanted to go to places and meet new people, and have fun.", "I am happy today because"]
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = 'left'
if tokenizer.pad_token_id is None:
tokenizer.pad_token_id = tokenizer.eos_token_id
pipe = pipeline("text-generation", model=model_id, device_map="cpu", torch_dtype=torch.bfloat16, tokenizer=tokenizer)
output = pipe(texts, batch_size=2)
print(output)
```
### Expected behavior
Before the regression PR:
```
[[{'generated_text': 'Once upon a time, there existed a little girl, who liked to have adventures. She wanted to go to places and meet new people, and have fun. One day, she decided to go on a journey to the forest. She packed a small bag and set off early in the morning. As she walked, the sun rose higher in the sky and the trees grew taller.\nShe walked for a while, but the forest seemed to go on forever. She began to feel a bit scared. What if she got lost? What if she encountered wild animals? But she didn\'t want to turn back. She remembered her mother\'s words, "Courage is not the absence of fear, but rather the judgment that something else is more important than fear." She took a deep breath and continued on her journey.\nAs she walked, the trees grew closer together and the path became narrower. She had to push aside branches and fight her way through thorny vines. But she didn\'t give up. She kept going, her heart beating faster and faster.\nSuddenly, she heard a rustling in the bushes. She stopped and listened. A beautiful bird emerged from the underbrush. It was a rare species, with feathers of the most vibrant colors she had ever seen. The bird looked at her with big, round eyes and tweeted a sweet melody.\nThe little girl was amazed and delighted. She sat down on a rock, and the bird per'}], [{'generated_text': 'I am happy today because I had a great day in the kitchen. I made a delicious breakfast for my family, and it was a hit! We had scrambled eggs, bacon, and pancakes. The pancakes were a special recipe that I found online, and they were so fluffy and light. My family loved them, and they even asked for seconds.\nBut the best part of my day was making a special treat for my kids. They love when I make them a "breakfast for dinner" treat, and tonight I made them pancakes and sausage. They were so excited to have pancakes for dinner, and they loved the sausage. 
It was a fun twist on a classic meal.\nI am grateful for the opportunity to spend time in the kitchen and make meals for my family. It is a joy to see them enjoy the food I make, and it brings me so much happiness. I feel like I am making a difference in their lives, even if it\'s just in a small way. And that\'s what makes it all worth it.\nWhat are some of your favorite meals to make for your family? Do you have any special recipes that you like to make on occasion? I would love to hear about them! Let\'s chat in the comments below!\nI am so grateful for the blessings in my life'}]]
```
After the regression PR:
```
File "/home/jiqingfe/transformers/src/transformers/pipelines/base.py", line 1338, in forward
model_outputs = self._forward(model_inputs, **forward_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jiqingfe/transformers/src/transformers/pipelines/text_generation.py", line 400, in _forward
output = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/jiqingfe/transformers/src/transformers/generation/utils.py", line 2623, in generate
result = self._sample(
^^^^^^^^^^^^^
File "/home/jiqingfe/transformers/src/transformers/generation/utils.py", line 3649, in _sample
next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
``` | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38690/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38689 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38689/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38689/comments | https://api.github.com/repos/huggingface/transformers/issues/38689/events | https://github.com/huggingface/transformers/pull/38689 | 3,129,434,484 | PR_kwDOCUB6oc6Zo29Q | 38,689 | from 1.11.0, torchao.prototype.low_bit_optim is promoted to torchao.optim | {
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T07:26:35 | 2025-06-11T13:40:11 | 2025-06-11T12:16:26 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38689",
"html_url": "https://github.com/huggingface/transformers/pull/38689",
"diff_url": "https://github.com/huggingface/transformers/pull/38689.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38689.patch",
"merged_at": "2025-06-11T12:16:26"
Per https://github.com/pytorch/ao/tree/main/torchao/prototype, from 1.11.0 `torchao.prototype.low_bit_optim` is promoted to `torchao.optim`, so torchao 1.11.0 raises `AttributeError: module 'torchao.prototype' has no attribute 'low_bit_optim'`.
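One common way to handle this kind of rename is a small fallback import helper. The helper below is a hypothetical sketch (it is not part of this PR); the torchao module paths in the comment are the ones named above.

```python
import importlib


def import_first_available(*paths):
    """Return the first importable module from a list of dotted paths.

    Illustrative helper for handling renames such as
    torchao.prototype.low_bit_optim -> torchao.optim across versions.
    """
    for path in paths:
        try:
            return importlib.import_module(path)
        except ImportError:
            continue
    raise ImportError(f"none of {paths} could be imported")


# With torchao installed, one would call e.g.:
# low_bit_optim = import_first_available("torchao.optim",
#                                        "torchao.prototype.low_bit_optim")
```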
@ydshieh , pls help review, thx. | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38689/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38688 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38688/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38688/comments | https://api.github.com/repos/huggingface/transformers/issues/38688/events | https://github.com/huggingface/transformers/pull/38688 | 3,129,319,143 | PR_kwDOCUB6oc6ZodZb | 38,688 | Add fireflies model | {
"login": "Arynz-C",
"id": 68093214,
"node_id": "MDQ6VXNlcjY4MDkzMjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/68093214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arynz-C",
"html_url": "https://github.com/Arynz-C",
"followers_url": "https://api.github.com/users/Arynz-C/followers",
"following_url": "https://api.github.com/users/Arynz-C/following{/other_user}",
"gists_url": "https://api.github.com/users/Arynz-C/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arynz-C/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arynz-C/subscriptions",
"organizations_url": "https://api.github.com/users/Arynz-C/orgs",
"repos_url": "https://api.github.com/users/Arynz-C/repos",
"events_url": "https://api.github.com/users/Arynz-C/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arynz-C/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T06:38:19 | 2025-06-09T06:52:49 | 2025-06-09T06:52:49 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38688",
"html_url": "https://github.com/huggingface/transformers/pull/38688",
"diff_url": "https://github.com/huggingface/transformers/pull/38688.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38688.patch",
"merged_at": null
# ✨ What does this PR do?
This PR introduces a new transformer-based model architecture called Fireflies into the Hugging Face Transformers library.
The Fireflies model is a flexible, encoder-based architecture designed for language modeling and research experimentation. It is compatible with the Hugging Face AutoModel and AutoConfig classes, and uses the existing GPT2 tokenizer for training and inference.
Changes included:
- `FirefliesConfig` class
- `FirefliesModel` class
- Registration of "fireflies" into `CONFIG_MAPPING_NAMES` and `MODEL_MAPPING_NAMES`
- Integration with the AutoModel/AutoConfig registry system
- Support for loading the model using `AutoModel.from_pretrained(...)`
## Motivation and Context
This model is intended to provide a minimal but extensible transformer encoder block that can be trained on various datasets, suitable for small-scale experiments and educational purposes. It helps demonstrate how to register and integrate a new model into the Hugging Face ecosystem.
## Fixes
N/A
## Before submitting
- This PR adds a new model with config and modeling files
- I have read the [contributor guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request)
- Model is registered in AutoModel and AutoConfig mappings
- The model uses an existing tokenizer (GPT2Tokenizer)
- Unit tests will be added in a follow-up PR (or on request)
## Who can review?
This model may be reviewed by members familiar with model integration:
- Text models: @ArthurZucker
- Auto modeling: @zach-huggingface
- Docs & registry structure: @Rocketknight1 | {
"login": "Arynz-C",
"id": 68093214,
"node_id": "MDQ6VXNlcjY4MDkzMjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/68093214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arynz-C",
"html_url": "https://github.com/Arynz-C",
"followers_url": "https://api.github.com/users/Arynz-C/followers",
"following_url": "https://api.github.com/users/Arynz-C/following{/other_user}",
"gists_url": "https://api.github.com/users/Arynz-C/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arynz-C/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arynz-C/subscriptions",
"organizations_url": "https://api.github.com/users/Arynz-C/orgs",
"repos_url": "https://api.github.com/users/Arynz-C/repos",
"events_url": "https://api.github.com/users/Arynz-C/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arynz-C/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38688/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38687 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38687/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38687/comments | https://api.github.com/repos/huggingface/transformers/issues/38687/events | https://github.com/huggingface/transformers/issues/38687 | 3,129,235,225 | I_kwDOCUB6oc66hFcZ | 38,687 | [RuntimeError: Expected all tensors to be on the same device, but found at least two devices] when fine-tuning with peft and device_map=auto | {
"login": "karoaper",
"id": 13630945,
"node_id": "MDQ6VXNlcjEzNjMwOTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/13630945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karoaper",
"html_url": "https://github.com/karoaper",
"followers_url": "https://api.github.com/users/karoaper/followers",
"following_url": "https://api.github.com/users/karoaper/following{/other_user}",
"gists_url": "https://api.github.com/users/karoaper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karoaper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karoaper/subscriptions",
"organizations_url": "https://api.github.com/users/karoaper/orgs",
"repos_url": "https://api.github.com/users/karoaper/repos",
"events_url": "https://api.github.com/users/karoaper/events{/privacy}",
"received_events_url": "https://api.github.com/users/karoaper/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-09T05:56:21 | 2025-07-17T08:02:41 | 2025-07-17T08:02:41 | NONE | null | null | null | null | ### System Info
transformers version: 4.52.4
Platform: linux
Python version: 3.10.16
Accelerate version: 1.7.0
PyTorch version: 2.6.0+cu124
peft version: 0.15.2
trl version: 0.18.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from datasets import load_dataset
import copy
import torch
from peft import AutoPeftModelForCausalLM, LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import TrainingArguments
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTTrainer, SFTConfig
instruct_tune_dataset = load_dataset("mosaicml/instruct-v3")
instruct_tune_dataset = instruct_tune_dataset.filter(lambda x: x["source"] == "dolly_hhrlhf").rename_column('response','completion')
instruct_tune_dataset["train"] = instruct_tune_dataset["train"].select(range(5_000))
instruct_tune_dataset["test"] = instruct_tune_dataset["test"].select(range(200))
nf4_config = BitsAndBytesConfig(
bnb_4bit_quant_type="nf4",
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
"mistralai/Mistral-7B-Instruct-v0.1",
device_map='auto',
quantization_config=nf4_config,
use_cache=False
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
peft_config = LoraConfig(
lora_alpha=16,
lora_dropout=0.1,
r=64,
bias="none",
task_type="CAUSAL_LM",
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, peft_config)
args = SFTConfig(
output_dir = "mistral_instruct_generation",
max_steps = 100, # comment out this line if you want to train in epochs
per_device_train_batch_size = 4,
warmup_ratio = 0.03,
logging_steps=10,
save_strategy="epoch",
eval_strategy="steps",
eval_steps=20, # comment out this line if you want to evaluate at the end of each epoch
learning_rate=2e-4,
bf16=True,
lr_scheduler_type='constant',
max_seq_length=2048,
packing=True,
completion_only_loss = True,
dataset_num_proc=1
)
trainer = SFTTrainer(
model=model,
peft_config=peft_config,
processing_class=tokenizer,
args=args,
train_dataset=instruct_tune_dataset["train"],
eval_dataset=instruct_tune_dataset["test"]
)
trainer.train()
```
### Expected behavior
Should complete training | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38687/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38686 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38686/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38686/comments | https://api.github.com/repos/huggingface/transformers/issues/38686/events | https://github.com/huggingface/transformers/issues/38686 | 3,129,203,484 | I_kwDOCUB6oc66g9sc | 38,686 | `Trainer._save()` May Incorrectly Save Empty Model State (safetensors) | {
"login": "ChenDaiwei-99",
"id": 81737228,
"node_id": "MDQ6VXNlcjgxNzM3MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/81737228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChenDaiwei-99",
"html_url": "https://github.com/ChenDaiwei-99",
"followers_url": "https://api.github.com/users/ChenDaiwei-99/followers",
"following_url": "https://api.github.com/users/ChenDaiwei-99/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenDaiwei-99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChenDaiwei-99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenDaiwei-99/subscriptions",
"organizations_url": "https://api.github.com/users/ChenDaiwei-99/orgs",
"repos_url": "https://api.github.com/users/ChenDaiwei-99/repos",
"events_url": "https://api.github.com/users/ChenDaiwei-99/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChenDaiwei-99/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-09T05:36:59 | 2025-07-24T08:02:53 | 2025-07-24T08:02:53 | NONE | null | null | null | null | There appears to be a potential issue in the `save_model()` method of the `Trainer` class in the `Transformers` library.
When the model inherits from `PreTrainedModel`, the `_save()` function follows [this logic](https://github.com/huggingface/transformers/blob/10627c1a0f6877ce6715b9537afe7fafb2a89edd/src/transformers/trainer.py#L4001-L4020). However, it is possible that the `state_dict` variable is `None`, which can result in saving an empty state dict.
This can happen in the following case:
> `self.args.should_save` is True (as in lines [3913–3914](https://github.com/huggingface/transformers/blob/10627c1a0f6877ce6715b9537afe7fafb2a89edd/src/transformers/trainer.py#L3913-L3914))
>
> All preceding conditional branches (lines [3879–3911](https://github.com/huggingface/transformers/blob/10627c1a0f6877ce6715b9537afe7fafb2a89edd/src/transformers/trainer.py#L3879-L3911)) are false
>
> As a result, `self._save(output_dir)` is called without a state_dict, and state_dict remains None.
I suggest moving the following lines (currently at lines [4002–4003](https://github.com/huggingface/transformers/blob/10627c1a0f6877ce6715b9537afe7fafb2a89edd/src/transformers/trainer.py#L4002C1-L4003C53)):
```python
if state_dict is None:
state_dict = self.model.state_dict()
```
outside the if-else block, so that `state_dict` is always populated before saving.
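A minimal, self-contained sketch of the proposed control flow (the stub classes below stand in for the real `Trainer` and model; the names are illustrative, not the actual Transformers code):

```python
class TinyModel:
    """Stand-in for a PreTrainedModel subclass; only state_dict() is needed."""
    def state_dict(self):
        return {"weight": 1.0}

class TrainerSketch:
    """Toy analog of Trainer illustrating the proposed ordering in _save()."""
    def __init__(self, model):
        self.model = model
        self.saved_state_dict = None

    def _save(self, output_dir, state_dict=None):
        # Proposed fix: resolve a None state_dict *before* any branching,
        # so every save path receives the real weights.
        if state_dict is None:
            state_dict = self.model.state_dict()
        # ... branch on model type / safetensors / sharding here ...
        self.saved_state_dict = state_dict  # stand-in for save_pretrained()

trainer = TrainerSketch(TinyModel())
trainer._save("output_dir")  # called without a state_dict, as in this report
```

Under this ordering, `_save` can never forward a `None` state dict to the underlying save call, regardless of which branch is taken.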
**Please let me know if this analysis is correct, and whether I can submit a PR to fix it. Thank you!**
### Reproduction
1. Define and initialize a custom model that inherits from PreTrainedModel.
2. Instantiate a Trainer with the following TrainingArguments configuration:
- load_best_model_at_end=True
- fsdp disabled (is_fsdp_enabled=False)
- deepspeed disabled (is_deepspeed_enabled=False)
3. At the end of training, when the trainer tries to load the best model, it will return an error message saying that weights are missing.
| {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38686/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38685 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38685/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38685/comments | https://api.github.com/repos/huggingface/transformers/issues/38685/events | https://github.com/huggingface/transformers/pull/38685 | 3,129,161,308 | PR_kwDOCUB6oc6Zn6qJ | 38,685 | Fix ImportError with DTensor by updating version check | {
"login": "Obssaa",
"id": 103986577,
"node_id": "U_kgDOBjK1kQ",
"avatar_url": "https://avatars.githubusercontent.com/u/103986577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Obssaa",
"html_url": "https://github.com/Obssaa",
"followers_url": "https://api.github.com/users/Obssaa/followers",
"following_url": "https://api.github.com/users/Obssaa/following{/other_user}",
"gists_url": "https://api.github.com/users/Obssaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Obssaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Obssaa/subscriptions",
"organizations_url": "https://api.github.com/users/Obssaa/orgs",
"repos_url": "https://api.github.com/users/Obssaa/repos",
"events_url": "https://api.github.com/users/Obssaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Obssaa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T05:12:41 | 2025-06-09T13:13:05 | 2025-06-09T13:13:04 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38685",
"html_url": "https://github.com/huggingface/transformers/pull/38685",
"diff_url": "https://github.com/huggingface/transformers/pull/38685.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38685.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #38639
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38685/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38684 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38684/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38684/comments | https://api.github.com/repos/huggingface/transformers/issues/38684/events | https://github.com/huggingface/transformers/pull/38684 | 3,129,123,232 | PR_kwDOCUB6oc6ZnyXm | 38,684 | Fix AssertionError when saving CodeT5+ 2B checkpoints (Resolves #38602) | {
"login": "premkiran2",
"id": 60691692,
"node_id": "MDQ6VXNlcjYwNjkxNjky",
"avatar_url": "https://avatars.githubusercontent.com/u/60691692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/premkiran2",
"html_url": "https://github.com/premkiran2",
"followers_url": "https://api.github.com/users/premkiran2/followers",
"following_url": "https://api.github.com/users/premkiran2/following{/other_user}",
"gists_url": "https://api.github.com/users/premkiran2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/premkiran2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/premkiran2/subscriptions",
"organizations_url": "https://api.github.com/users/premkiran2/orgs",
"repos_url": "https://api.github.com/users/premkiran2/repos",
"events_url": "https://api.github.com/users/premkiran2/events{/privacy}",
"received_events_url": "https://api.github.com/users/premkiran2/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-09T04:44:59 | 2025-06-09T14:28:15 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38684",
"html_url": "https://github.com/huggingface/transformers/pull/38684",
"diff_url": "https://github.com/huggingface/transformers/pull/38684.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38684.patch",
"merged_at": null
This PR resolves a checkpoint saving failure during full fine-tuning of the Salesforce/codet5p-2b model. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38684/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/38683 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38683/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38683/comments | https://api.github.com/repos/huggingface/transformers/issues/38683/events | https://github.com/huggingface/transformers/pull/38683 | 3,129,115,560 | PR_kwDOCUB6oc6ZnwrZ | 38,683 | Fixing the Incorrect PyTorch API Call issue with respect to the device (#38457) | {
"login": "Jahnavidarisetti",
"id": 102846075,
"node_id": "U_kgDOBiFOew",
"avatar_url": "https://avatars.githubusercontent.com/u/102846075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jahnavidarisetti",
"html_url": "https://github.com/Jahnavidarisetti",
"followers_url": "https://api.github.com/users/Jahnavidarisetti/followers",
"following_url": "https://api.github.com/users/Jahnavidarisetti/following{/other_user}",
"gists_url": "https://api.github.com/users/Jahnavidarisetti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jahnavidarisetti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jahnavidarisetti/subscriptions",
"organizations_url": "https://api.github.com/users/Jahnavidarisetti/orgs",
"repos_url": "https://api.github.com/users/Jahnavidarisetti/repos",
"events_url": "https://api.github.com/users/Jahnavidarisetti/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jahnavidarisetti/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T04:38:56 | 2025-06-09T14:27:00 | 2025-06-09T14:27:00 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38683",
"html_url": "https://github.com/huggingface/transformers/pull/38683",
"diff_url": "https://github.com/huggingface/transformers/pull/38683.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38683.patch",
"merged_at": null
This pull request resolves the Hugging Face issue where an incorrect PyTorch API call caused an AttributeError that stopped the Transformers library from loading the model. The solution adds a new utility function that accurately determines whether the device is `cpu` or `cuda`, and that accepts the current device as input when no `device_map` is specified.
The following criteria are fulfilled:
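A pure-Python sketch of the device-resolution logic described above (the function name and signature are illustrative, not the actual utility added by this PR; the `cuda_available` argument stands in for `torch.cuda.is_available()` so the sketch runs without PyTorch):

```python
def resolve_device(device=None, cuda_available=False):
    """Return the device string to use.

    Respects an explicitly requested device; otherwise falls back to
    "cuda" when available and "cpu" when not, mirroring the default
    behaviour described in this PR when no device_map is specified.
    """
    if device is not None:
        return device
    return "cuda" if cuda_available else "cpu"

# An explicit request wins; otherwise availability decides.
assumed_default = resolve_device(cuda_available=False)  # "cpu"
```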
- HuggingFaceEmbeddings initializes without raising AttributeError.
- The solution uses a valid PyTorch API.
- The device defaults to cuda if available, otherwise cpu.
- The fix is compatible with PyTorch 2.2 and Transformers 4.52.1.
- Model initialization succeeds for GLUE, SQuAD, and custom datasets. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38683/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38682 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38682/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38682/comments | https://api.github.com/repos/huggingface/transformers/issues/38682/events | https://github.com/huggingface/transformers/pull/38682 | 3,129,106,841 | PR_kwDOCUB6oc6ZnuyL | 38,682 | Fixed the handling issue (#38523) that has mismatched architecture sizes with logging and also improved testing | {
"login": "vbramhadevi",
"id": 195518401,
"node_id": "U_kgDOC6dfwQ",
"avatar_url": "https://avatars.githubusercontent.com/u/195518401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vbramhadevi",
"html_url": "https://github.com/vbramhadevi",
"followers_url": "https://api.github.com/users/vbramhadevi/followers",
"following_url": "https://api.github.com/users/vbramhadevi/following{/other_user}",
"gists_url": "https://api.github.com/users/vbramhadevi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vbramhadevi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vbramhadevi/subscriptions",
"organizations_url": "https://api.github.com/users/vbramhadevi/orgs",
"repos_url": "https://api.github.com/users/vbramhadevi/repos",
"events_url": "https://api.github.com/users/vbramhadevi/events{/privacy}",
"received_events_url": "https://api.github.com/users/vbramhadevi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T04:31:54 | 2025-06-09T14:26:03 | 2025-06-09T14:26:03 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38682",
"html_url": "https://github.com/huggingface/transformers/pull/38682",
"diff_url": "https://github.com/huggingface/transformers/pull/38682.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38682.patch",
"merged_at": null
This pull request fixes issue https://github.com/huggingface/transformers/issues/38523 by updating model loading to handle architectures with mismatched sizes, and strengthens the test cases with Red-Green-Refactor TDD.
**Red Phase:**
The initial test case fails when loading the model due to a size mismatch.
**Green Phase:**
The required dependencies were imported and the model now ignores the mismatched sizes, allowing it to load despite them and resolving the RuntimeError.
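A toy, self-contained analog of what skipping mismatched sizes means at load time (this mirrors the effect of `from_pretrained(..., ignore_mismatched_sizes=True)`; the helper below is illustrative, not the library code):

```python
def filter_state_dict(model_shapes, checkpoint, ignore_mismatched_sizes=False):
    """Keep only checkpoint tensors whose shapes match the model.

    model_shapes: mapping of parameter name -> expected shape tuple.
    checkpoint:   mapping of parameter name -> (shape tuple, values).
    Returns the loadable entries and the list of skipped keys.
    """
    loadable, skipped = {}, []
    for name, (shape, values) in checkpoint.items():
        if name in model_shapes and model_shapes[name] != shape:
            if not ignore_mismatched_sizes:
                raise RuntimeError(f"size mismatch for {name}")
            skipped.append(name)  # parameter stays at its fresh init
            continue
        loadable[name] = values
    return loadable, skipped

model_shapes = {"lm_head.weight": (2, 4), "encoder.weight": (4, 4)}
checkpoint = {"lm_head.weight": ((3, 4), "old"), "encoder.weight": ((4, 4), "ok")}
loadable, skipped = filter_state_dict(model_shapes, checkpoint,
                                      ignore_mismatched_sizes=True)
```

With the flag off, the mismatch raises instead of being skipped, which is the failure mode this PR's red-phase test reproduces.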
**Refactor Phase:**
- A type check is used to strengthen the test by validating the model instance.
- A docstring is added documenting the test's purpose.
- Removed unused instances for cleanup.
- Updated the duplicate pytest section for consistency. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38682/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38681 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38681/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38681/comments | https://api.github.com/repos/huggingface/transformers/issues/38681/events | https://github.com/huggingface/transformers/pull/38681 | 3,129,047,412 | PR_kwDOCUB6oc6Znh3X | 38,681 | Add Fireflies model to Transformers | {
"login": "Arynz-C",
"id": 68093214,
"node_id": "MDQ6VXNlcjY4MDkzMjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/68093214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arynz-C",
"html_url": "https://github.com/Arynz-C",
"followers_url": "https://api.github.com/users/Arynz-C/followers",
"following_url": "https://api.github.com/users/Arynz-C/following{/other_user}",
"gists_url": "https://api.github.com/users/Arynz-C/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arynz-C/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arynz-C/subscriptions",
"organizations_url": "https://api.github.com/users/Arynz-C/orgs",
"repos_url": "https://api.github.com/users/Arynz-C/repos",
"events_url": "https://api.github.com/users/Arynz-C/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arynz-C/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-09T03:43:30 | 2025-06-09T04:09:15 | 2025-06-09T04:09:15 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38681",
"html_url": "https://github.com/huggingface/transformers/pull/38681",
"diff_url": "https://github.com/huggingface/transformers/pull/38681.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38681.patch",
"merged_at": null
} | # What does this PR do?
This PR adds a new custom Transformer-based language model architecture named **Fireflies** to the Hugging Face Transformers library.
Fireflies is a TransformerEncoder-based model designed for language modeling tasks, featuring configurable parameters such as number of layers, hidden dimensions, and attention heads.
This addition includes:
- `FirefliesConfig` to define model hyperparameters.
- `FirefliesModel` implementing the model forward pass.
- Integration with the Transformers model hub via the AutoModel mechanism.
- Example tokenizer compatibility using GPT-2 tokenizer.
- Support for training using the Trainer API with the Wikitext-2 dataset.
## Motivation and Context
This model architecture is introduced to provide users with a flexible encoder-based Transformer model alternative suitable for language modeling and research experiments. It aims to expand the model choices available in the Transformers library.
## Dependencies
- PyTorch
- Transformers >= 4.30
- Datasets
## Before submitting
- [x] This PR adds a new model and includes the required config, model, and integration files.
- [x] I have read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request).
- [x] This contribution was discussed/approved in an issue or on the forum (please add link if applicable).
- [x] Documentation has been updated accordingly.
- [x] Tests have been added for new features (can add later).
## Who can review?
This PR can be reviewed by members knowledgeable in Transformer model implementations, particularly @zach-huggingface and @ArthurZucker who often review new model additions.
| {
"login": "Arynz-C",
"id": 68093214,
"node_id": "MDQ6VXNlcjY4MDkzMjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/68093214?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arynz-C",
"html_url": "https://github.com/Arynz-C",
"followers_url": "https://api.github.com/users/Arynz-C/followers",
"following_url": "https://api.github.com/users/Arynz-C/following{/other_user}",
"gists_url": "https://api.github.com/users/Arynz-C/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arynz-C/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arynz-C/subscriptions",
"organizations_url": "https://api.github.com/users/Arynz-C/orgs",
"repos_url": "https://api.github.com/users/Arynz-C/repos",
"events_url": "https://api.github.com/users/Arynz-C/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arynz-C/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38681/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38680 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38680/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38680/comments | https://api.github.com/repos/huggingface/transformers/issues/38680/events | https://github.com/huggingface/transformers/issues/38680 | 3,128,995,119 | I_kwDOCUB6oc66gK0v | 38,680 | Add support for Orthogonal Residual Updates | {
"login": "BootsofLagrangian",
"id": 125134079,
"node_id": "U_kgDOB3Vk_w",
"avatar_url": "https://avatars.githubusercontent.com/u/125134079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BootsofLagrangian",
"html_url": "https://github.com/BootsofLagrangian",
"followers_url": "https://api.github.com/users/BootsofLagrangian/followers",
"following_url": "https://api.github.com/users/BootsofLagrangian/following{/other_user}",
"gists_url": "https://api.github.com/users/BootsofLagrangian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BootsofLagrangian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BootsofLagrangian/subscriptions",
"organizations_url": "https://api.github.com/users/BootsofLagrangian/orgs",
"repos_url": "https://api.github.com/users/BootsofLagrangian/repos",
"events_url": "https://api.github.com/users/BootsofLagrangian/events{/privacy}",
"received_events_url": "https://api.github.com/users/BootsofLagrangian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-06-09T03:04:07 | 2025-06-09T14:22:28 | null | NONE | null | null | null | null | ### Feature request
This is a feature request to implement "Orthogonal Residual Update," a novel mechanism for training deep neural networks proposed in the paper "[Revisiting Residual Connections: Orthogonal Updates for Stable and Efficient Deep Networks](https://arxiv.org/abs/2505.11881)".
The core idea is to modify the standard additive residual connection, which is pivotal in modern architectures like Transformers and ResNets. Instead of the standard update $x_{n+1} = x_n + f(\sigma(x_n))$, the proposed method decomposes the module's output $f(\sigma(x_n))$ into two components: one parallel to the input stream $x_n$ and one orthogonal to it. The update is then performed using only the orthogonal component.
This method has been shown to improve generalization accuracy and training stability across diverse architectures (including Vision Transformers and ResNetV2) and datasets. The implementation is computationally efficient, adding only a minimal number of FLOPs ($O(sd)$ in a Transformer block) compared to the main modules.
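For a single token vector, the update can be sketched in a few lines (a minimal NumPy illustration of the decomposition described above, not the paper's batched implementation; the `eps` guard against division by zero is an assumption here):

```python
import numpy as np

def orthogonal_residual_update(x, f_out, eps=1e-9):
    # Split the module output f(sigma(x)) into a component parallel to the
    # stream x and a component orthogonal to it, then add only the
    # orthogonal part to the residual stream.
    coeff = np.dot(f_out, x) / (np.dot(x, x) + eps)
    f_parallel = coeff * x
    f_orthogonal = f_out - f_parallel
    return x + f_orthogonal
```

By construction the added component has (numerically) zero inner product with `x`, so the module contributes only a new direction; the extra cost is two dot products and a scaled subtraction per vector, consistent with the minimal FLOP overhead claimed in the paper.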
Paper: https://arxiv.org/abs/2505.11881
Original Code: https://github.com/BootsofLagrangian/ortho-residual
### Motivation
The motivation for this proposal is to address a potential inefficiency in standard residual connections. In current models, a module's learned transformation can predominantly scale or modulate the magnitude of the existing feature stream, potentially underutilizing the module's capacity for learning entirely new, complex features. This can lead to representational redundancy.
By explicitly encouraging modules to contribute novel information (i.e., new directional components orthogonal to the existing representation), the Orthogonal Residual Update mechanism provides a path toward more efficient and stable training, as well as better generalization. For instance, a ViT-B model with this update achieved a +4.3%p top-1 accuracy gain on ImageNet-1k.
Given that the principle of a linear residual stream is a core feature in most transformers models, integrating this mechanism as a configurable option could provide a significant and low-cost benefit to the community for a wide range of models and tasks.
### Your contribution
Yes, I can help by submitting an issue. I have read the `CONTRIBUTING.md` guide.
My proposed contribution would include:
- Implementing the core Orthogonal Residual Update logic in a reusable way that can be applied to different layers (e.g., Attention and MLP blocks).
- Adding a first concrete implementation with the OrthoViT model, including `OrthoViTConfig`, `OrthoViTModel`, and `OrthoViTForImageClassification`.
- Providing comprehensive tests for the new model and the orthogonal update logic to ensure correctness and integration with the library's testing framework.
- Adding the necessary documentation for the new model, following the library's standards.
- An OrthoViT model example : https://huggingface.co/BootsofLagrangian/ortho-vit-b-imagenet1k-hf | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38680/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/38679 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38679/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38679/comments | https://api.github.com/repos/huggingface/transformers/issues/38679/events | https://github.com/huggingface/transformers/pull/38679 | 3,128,723,272 | PR_kwDOCUB6oc6Zmdue | 38,679 | Transformers Processor and Model Serialization | {
"login": "mohiuddin-khan-shiam",
"id": 147746955,
"node_id": "U_kgDOCM5wiw",
"avatar_url": "https://avatars.githubusercontent.com/u/147746955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mohiuddin-khan-shiam",
"html_url": "https://github.com/mohiuddin-khan-shiam",
"followers_url": "https://api.github.com/users/mohiuddin-khan-shiam/followers",
"following_url": "https://api.github.com/users/mohiuddin-khan-shiam/following{/other_user}",
"gists_url": "https://api.github.com/users/mohiuddin-khan-shiam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mohiuddin-khan-shiam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mohiuddin-khan-shiam/subscriptions",
"organizations_url": "https://api.github.com/users/mohiuddin-khan-shiam/orgs",
"repos_url": "https://api.github.com/users/mohiuddin-khan-shiam/repos",
"events_url": "https://api.github.com/users/mohiuddin-khan-shiam/events{/privacy}",
"received_events_url": "https://api.github.com/users/mohiuddin-khan-shiam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-08T21:54:46 | 2025-06-09T14:09:00 | 2025-06-09T14:08:59 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38679",
"html_url": "https://github.com/huggingface/transformers/pull/38679",
"diff_url": "https://github.com/huggingface/transformers/pull/38679.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38679.patch",
"merged_at": null
} | This PR captures insights from a deep dive into the Hugging Face Transformers library's internal mechanisms for loading and saving processors and models.
Key areas explored include:
* `AutoProcessor` and `ProcessorMixin` for processor loading/saving (e.g., `LlavaProcessor`), including:
* Handling of individual components (tokenizers, image processors).
* Support for custom processor code.
* Serialization of chat templates.
* `_BaseAutoModelClass` (via `AutoModelForXxx`) and `PreTrainedModel` for model loading/saving, covering:
* Dynamic loading of custom model code.
* Integration with PEFT adapters.
* Weight sharding for large models.
* Support for `safetensors`.
* The central role of configuration files (`config.json`, `processor_config.json`, `generation_config.json`) in driving these serialization processes.
* The overall architecture that enables modularity and extensibility, particularly for multimodal and chat-based generation scenarios. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38679/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38678 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38678/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38678/comments | https://api.github.com/repos/huggingface/transformers/issues/38678/events | https://github.com/huggingface/transformers/pull/38678 | 3,128,694,136 | PR_kwDOCUB6oc6ZmX6u | 38,678 | fix: "check out" as verb | {
"login": "DePasqualeOrg",
"id": 25420077,
"node_id": "MDQ6VXNlcjI1NDIwMDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/25420077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DePasqualeOrg",
"html_url": "https://github.com/DePasqualeOrg",
"followers_url": "https://api.github.com/users/DePasqualeOrg/followers",
"following_url": "https://api.github.com/users/DePasqualeOrg/following{/other_user}",
"gists_url": "https://api.github.com/users/DePasqualeOrg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DePasqualeOrg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DePasqualeOrg/subscriptions",
"organizations_url": "https://api.github.com/users/DePasqualeOrg/orgs",
"repos_url": "https://api.github.com/users/DePasqualeOrg/repos",
"events_url": "https://api.github.com/users/DePasqualeOrg/events{/privacy}",
"received_events_url": "https://api.github.com/users/DePasqualeOrg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-08T21:00:22 | 2025-06-09T14:08:08 | 2025-06-09T14:07:32 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38678",
"html_url": "https://github.com/huggingface/transformers/pull/38678",
"diff_url": "https://github.com/huggingface/transformers/pull/38678.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38678.patch",
"merged_at": "2025-06-09T14:07:32"
} | In the context of git, "checkout" has become established as a verb. Otherwise, "checkout" in English is a noun (e.g. a store checkout), and "check out" is the correct English verb phrase.
This PR corrects instances of the latter case, while leaving any usage related to git unchanged. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38678/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38677 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38677/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38677/comments | https://api.github.com/repos/huggingface/transformers/issues/38677/events | https://github.com/huggingface/transformers/pull/38677 | 3,128,619,892 | PR_kwDOCUB6oc6ZmIyS | 38,677 | Skip some export tests on torch 2.7 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-08T18:59:16 | 2025-06-12T10:47:17 | 2025-06-12T10:47:16 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38677",
"html_url": "https://github.com/huggingface/transformers/pull/38677",
"diff_url": "https://github.com/huggingface/transformers/pull/38677.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38677.patch",
"merged_at": "2025-06-12T10:47:16"
} | # What does this PR do?
cc @guangy10
issue opened https://github.com/pytorch/pytorch/issues/153599 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38677/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38676 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38676/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38676/comments | https://api.github.com/repos/huggingface/transformers/issues/38676/events | https://github.com/huggingface/transformers/pull/38676 | 3,128,601,217 | PR_kwDOCUB6oc6ZmE_n | 38,676 | v4.52.4, `SwissAIForTokenClassification` | {
"login": "EduardDurech",
"id": 39579228,
"node_id": "MDQ6VXNlcjM5NTc5MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/39579228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EduardDurech",
"html_url": "https://github.com/EduardDurech",
"followers_url": "https://api.github.com/users/EduardDurech/followers",
"following_url": "https://api.github.com/users/EduardDurech/following{/other_user}",
"gists_url": "https://api.github.com/users/EduardDurech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EduardDurech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EduardDurech/subscriptions",
"organizations_url": "https://api.github.com/users/EduardDurech/orgs",
"repos_url": "https://api.github.com/users/EduardDurech/repos",
"events_url": "https://api.github.com/users/EduardDurech/events{/privacy}",
"received_events_url": "https://api.github.com/users/EduardDurech/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-08T18:29:39 | 2025-06-08T18:33:30 | 2025-06-08T18:33:30 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38676",
"html_url": "https://github.com/huggingface/transformers/pull/38676",
"diff_url": "https://github.com/huggingface/transformers/pull/38676.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38676.patch",
"merged_at": null
} | null | {
"login": "EduardDurech",
"id": 39579228,
"node_id": "MDQ6VXNlcjM5NTc5MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/39579228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EduardDurech",
"html_url": "https://github.com/EduardDurech",
"followers_url": "https://api.github.com/users/EduardDurech/followers",
"following_url": "https://api.github.com/users/EduardDurech/following{/other_user}",
"gists_url": "https://api.github.com/users/EduardDurech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EduardDurech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EduardDurech/subscriptions",
"organizations_url": "https://api.github.com/users/EduardDurech/orgs",
"repos_url": "https://api.github.com/users/EduardDurech/repos",
"events_url": "https://api.github.com/users/EduardDurech/events{/privacy}",
"received_events_url": "https://api.github.com/users/EduardDurech/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38676/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38675 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38675/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38675/comments | https://api.github.com/repos/huggingface/transformers/issues/38675/events | https://github.com/huggingface/transformers/pull/38675 | 3,128,404,664 | PR_kwDOCUB6oc6ZlbmF | 38,675 | Update pegasus model card | {
"login": "dross20",
"id": 73395516,
"node_id": "MDQ6VXNlcjczMzk1NTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/73395516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dross20",
"html_url": "https://github.com/dross20",
"followers_url": "https://api.github.com/users/dross20/followers",
"following_url": "https://api.github.com/users/dross20/following{/other_user}",
"gists_url": "https://api.github.com/users/dross20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dross20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dross20/subscriptions",
"organizations_url": "https://api.github.com/users/dross20/orgs",
"repos_url": "https://api.github.com/users/dross20/repos",
"events_url": "https://api.github.com/users/dross20/events{/privacy}",
"received_events_url": "https://api.github.com/users/dross20/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-08T14:35:31 | 2025-06-11T17:56:25 | 2025-06-11T17:56:25 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38675",
"html_url": "https://github.com/huggingface/transformers/pull/38675",
"diff_url": "https://github.com/huggingface/transformers/pull/38675.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38675.patch",
"merged_at": "2025-06-11T17:56:25"
} | # What does this PR do?
This PR replaces the Pegasus model card with a new model card matching the format introduced in https://github.com/huggingface/transformers/issues/36979.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stevhliu
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38675/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38674 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38674/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38674/comments | https://api.github.com/repos/huggingface/transformers/issues/38674/events | https://github.com/huggingface/transformers/pull/38674 | 3,128,314,448 | PR_kwDOCUB6oc6ZlI03 | 38,674 | Fix `aya_vision` test | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-08T12:38:14 | 2025-06-09T20:18:54 | 2025-06-09T20:18:52 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38674",
"html_url": "https://github.com/huggingface/transformers/pull/38674",
"diff_url": "https://github.com/huggingface/transformers/pull/38674.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38674.patch",
"merged_at": "2025-06-09T20:18:52"
} | # What does this PR do?
The processor tests and integration tests were never run due to an issue with `@require_read_token`.
After fix #38093, I should have updated the expected values for the integration tests, but that was not done at the time - it is done now.
For the processor tests, we no longer need `@require_read_token`, as the repository was changed to `hf-internal-testing/namespace-CohereForAI-repo_name_aya-vision-8b` (which does not include the model weights).
- Currently, `require_read_token` has an issue with `@staticmethod` - I need to fix it.
All tests pass on T4/A10 torch 2.6/2.7
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38674/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38673 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38673/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38673/comments | https://api.github.com/repos/huggingface/transformers/issues/38673/events | https://github.com/huggingface/transformers/pull/38673 | 3,128,016,701 | PR_kwDOCUB6oc6ZkIXD | 38,673 | Enable Sampling with Group Beam Search by Removing Restriction on do_sample | {
"login": "gspeter-max",
"id": 193389584,
"node_id": "U_kgDOC4bkEA",
"avatar_url": "https://avatars.githubusercontent.com/u/193389584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gspeter-max",
"html_url": "https://github.com/gspeter-max",
"followers_url": "https://api.github.com/users/gspeter-max/followers",
"following_url": "https://api.github.com/users/gspeter-max/following{/other_user}",
"gists_url": "https://api.github.com/users/gspeter-max/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gspeter-max/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gspeter-max/subscriptions",
"organizations_url": "https://api.github.com/users/gspeter-max/orgs",
"repos_url": "https://api.github.com/users/gspeter-max/repos",
"events_url": "https://api.github.com/users/gspeter-max/events{/privacy}",
"received_events_url": "https://api.github.com/users/gspeter-max/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-08T07:05:47 | 2025-06-08T07:18:18 | 2025-06-08T07:18:18 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38673",
"html_url": "https://github.com/huggingface/transformers/pull/38673",
"diff_url": "https://github.com/huggingface/transformers/pull/38673.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38673.patch",
"merged_at": null
This PR updates the group beam search implementation to allow using sampling (do_sample=True) alongside group beam search. Previously, the configuration validation prevented enabling do_sample with group beam search by raising an error. This restriction is now removed from validate() in configuration_utils.py.
Additionally, the _group_beam_search method in generation/utils.py is updated to properly handle sampling when do_sample=True. The logic now applies warpers, computes probabilities, samples candidate tokens, and gathers the corresponding scores, aligning the sampling behavior with other generation modes.
Key changes:
- Removed the error raised when do_sample is set to True with group beam search.
- Added sampling logic to _group_beam_search to support probabilistic token selection.
- Ensured backward compatibility for cases where sampling is not enabled.
Motivation:
Allowing sampling with group beam search increases the flexibility of text generation strategies and resolves the previous limitation. This change enables more diverse generation outputs for users who wish to combine these features. | {
"login": "gspeter-max",
"id": 193389584,
"node_id": "U_kgDOC4bkEA",
"avatar_url": "https://avatars.githubusercontent.com/u/193389584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gspeter-max",
"html_url": "https://github.com/gspeter-max",
"followers_url": "https://api.github.com/users/gspeter-max/followers",
"following_url": "https://api.github.com/users/gspeter-max/following{/other_user}",
"gists_url": "https://api.github.com/users/gspeter-max/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gspeter-max/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gspeter-max/subscriptions",
"organizations_url": "https://api.github.com/users/gspeter-max/orgs",
"repos_url": "https://api.github.com/users/gspeter-max/repos",
"events_url": "https://api.github.com/users/gspeter-max/events{/privacy}",
"received_events_url": "https://api.github.com/users/gspeter-max/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38673/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38672 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38672/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38672/comments | https://api.github.com/repos/huggingface/transformers/issues/38672/events | https://github.com/huggingface/transformers/pull/38672 | 3,128,006,266 | PR_kwDOCUB6oc6ZkGHi | 38,672 | AutoConfig has potential issue with composite config. #38258 solved | {
"login": "gspeter-max",
"id": 193389584,
"node_id": "U_kgDOC4bkEA",
"avatar_url": "https://avatars.githubusercontent.com/u/193389584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gspeter-max",
"html_url": "https://github.com/gspeter-max",
"followers_url": "https://api.github.com/users/gspeter-max/followers",
"following_url": "https://api.github.com/users/gspeter-max/following{/other_user}",
"gists_url": "https://api.github.com/users/gspeter-max/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gspeter-max/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gspeter-max/subscriptions",
"organizations_url": "https://api.github.com/users/gspeter-max/orgs",
"repos_url": "https://api.github.com/users/gspeter-max/repos",
"events_url": "https://api.github.com/users/gspeter-max/events{/privacy}",
"received_events_url": "https://api.github.com/users/gspeter-max/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-08T06:51:48 | 2025-06-09T07:31:13 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38672",
"html_url": "https://github.com/huggingface/transformers/pull/38672",
"diff_url": "https://github.com/huggingface/transformers/pull/38672.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38672.patch",
"merged_at": null
} |
Fixes #38258
## Description
This PR resolves an issue where keyword arguments passed to `from_pretrained` or `from_config` for composite models were not being correctly routed to the respective sub-configs. This would lead to a `TypeError` when an argument intended for a sub-model (e.g., `use_cache=True` for a text model) was passed down to a child constructor that did not accept it.
## Solution
The solution introduces a new private static method, `_route_kwargs`, to the `_BaseAutoModelClass` in `auto/factory.py`. This centralized helper method is responsible for:
1. Iterating through the provided `kwargs`.
2. Checking if a given keyword argument is a valid attribute of any of the model's sub-configs (e.g., `text_config`, `vision_config`).
3. If a match is found, the attribute is correctly set on the corresponding sub-config object (`config.text_config.use_cache = True`).
4. The keyword argument is then removed from the main `kwargs` dictionary to prevent it from being passed down incorrectly.
This helper method is now called from the entry points of both the `from_pretrained` and `from_config` methods. This ensures that the argument routing is applied robustly and consistently, regardless of how a user chooses to load a composite model.
This approach fixes the underlying issue in the factory layer, providing a general solution for all current and future composite models.
## Testing
I have confirmed that these changes fix the issue by running the relevant tests in `tests/models/auto/test_modeling_auto_composite.py`. Additionally, all quality checks (`make quality`) pass successfully. | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38672/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/38672/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/38671 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38671/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38671/comments | https://api.github.com/repos/huggingface/transformers/issues/38671/events | https://github.com/huggingface/transformers/pull/38671 | 3,128,003,108 | PR_kwDOCUB6oc6ZkFbl | 38,671 | Adding custom 3d mask into ModernBert | {
"login": "bvantuan",
"id": 37981884,
"node_id": "MDQ6VXNlcjM3OTgxODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/37981884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bvantuan",
"html_url": "https://github.com/bvantuan",
"followers_url": "https://api.github.com/users/bvantuan/followers",
"following_url": "https://api.github.com/users/bvantuan/following{/other_user}",
"gists_url": "https://api.github.com/users/bvantuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bvantuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bvantuan/subscriptions",
"organizations_url": "https://api.github.com/users/bvantuan/orgs",
"repos_url": "https://api.github.com/users/bvantuan/repos",
"events_url": "https://api.github.com/users/bvantuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/bvantuan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-08T06:47:38 | 2025-07-29T14:31:04 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38671",
"html_url": "https://github.com/huggingface/transformers/pull/38671",
"diff_url": "https://github.com/huggingface/transformers/pull/38671.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38671.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Feature requested by #38040
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38671/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/38670 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38670/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38670/comments | https://api.github.com/repos/huggingface/transformers/issues/38670/events | https://github.com/huggingface/transformers/pull/38670 | 3,127,946,194 | PR_kwDOCUB6oc6Zj5GZ | 38,670 | fix: bf16 with TPU is allowed in configuration | {
"login": "yevvonlim",
"id": 47552580,
"node_id": "MDQ6VXNlcjQ3NTUyNTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/47552580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yevvonlim",
"html_url": "https://github.com/yevvonlim",
"followers_url": "https://api.github.com/users/yevvonlim/followers",
"following_url": "https://api.github.com/users/yevvonlim/following{/other_user}",
"gists_url": "https://api.github.com/users/yevvonlim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yevvonlim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yevvonlim/subscriptions",
"organizations_url": "https://api.github.com/users/yevvonlim/orgs",
"repos_url": "https://api.github.com/users/yevvonlim/repos",
"events_url": "https://api.github.com/users/yevvonlim/events{/privacy}",
"received_events_url": "https://api.github.com/users/yevvonlim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-08T05:28:04 | 2025-06-11T12:35:31 | 2025-06-11T12:35:02 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38670",
"html_url": "https://github.com/huggingface/transformers/pull/38670",
"diff_url": "https://github.com/huggingface/transformers/pull/38670.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38670.patch",
"merged_at": "2025-06-11T12:35:02"
} | # What does this PR do?
TPUs now support bfloat16 (bf16), but the current version of transformers raises the error `"Your setup doesn't support bf16/gpu."` when bf16 is enabled on a TPU.
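The intended behavior can be sketched as follows. This is a minimal, hypothetical version of the validation (the helper names `is_torch_bf16_gpu_available` and `is_torch_xla_available` are stubbed below to simulate a TPU host; the actual check in transformers lives elsewhere and may differ):

```python
# Hypothetical sketch of the relaxed bf16 validation; the real transformers
# helpers may have different names/signatures. Stubs simulate a TPU-only host.
def is_torch_bf16_gpu_available():
    return False  # stub: no bf16-capable GPU on this host


def is_torch_xla_available():
    return True  # stub: torch_xla (TPU backend) is present


def check_bf16_support(bf16: bool, use_cpu: bool) -> None:
    if not bf16 or use_cpu:
        return
    # Previously only the GPU path was accepted; also accepting XLA/TPU
    # avoids the spurious "Your setup doesn't support bf16/gpu." error.
    if is_torch_bf16_gpu_available() or is_torch_xla_available():
        return
    raise ValueError("Your setup doesn't support bf16/gpu.")


check_bf16_support(bf16=True, use_cpu=False)  # no longer raises on a TPU host
```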
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? (no need)
## Who can review?
@zach-huggingface and @SunMarc
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38670/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38670/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38669 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38669/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38669/comments | https://api.github.com/repos/huggingface/transformers/issues/38669/events | https://github.com/huggingface/transformers/pull/38669 | 3,127,939,021 | PR_kwDOCUB6oc6Zj3hK | 38,669 | deci gguf support | {
"login": "ved1beta",
"id": 146507396,
"node_id": "U_kgDOCLuGhA",
"avatar_url": "https://avatars.githubusercontent.com/u/146507396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ved1beta",
"html_url": "https://github.com/ved1beta",
"followers_url": "https://api.github.com/users/ved1beta/followers",
"following_url": "https://api.github.com/users/ved1beta/following{/other_user}",
"gists_url": "https://api.github.com/users/ved1beta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ved1beta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ved1beta/subscriptions",
"organizations_url": "https://api.github.com/users/ved1beta/orgs",
"repos_url": "https://api.github.com/users/ved1beta/repos",
"events_url": "https://api.github.com/users/ved1beta/events{/privacy}",
"received_events_url": "https://api.github.com/users/ved1beta/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-08T05:18:39 | 2025-08-26T14:28:35 | 2025-08-26T13:43:17 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38669",
"html_url": "https://github.com/huggingface/transformers/pull/38669",
"diff_url": "https://github.com/huggingface/transformers/pull/38669.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38669.patch",
"merged_at": "2025-08-26T13:43:17"
} | # What does this PR do?
GGUF support for deci
Fixes #37736
## Before submitting
- [x] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
## Who can review?
@MekkCyber
| {
"login": "Isotr0py",
"id": 41363108,
"node_id": "MDQ6VXNlcjQxMzYzMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/41363108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Isotr0py",
"html_url": "https://github.com/Isotr0py",
"followers_url": "https://api.github.com/users/Isotr0py/followers",
"following_url": "https://api.github.com/users/Isotr0py/following{/other_user}",
"gists_url": "https://api.github.com/users/Isotr0py/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Isotr0py/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Isotr0py/subscriptions",
"organizations_url": "https://api.github.com/users/Isotr0py/orgs",
"repos_url": "https://api.github.com/users/Isotr0py/repos",
"events_url": "https://api.github.com/users/Isotr0py/events{/privacy}",
"received_events_url": "https://api.github.com/users/Isotr0py/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38669/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38668 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38668/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38668/comments | https://api.github.com/repos/huggingface/transformers/issues/38668/events | https://github.com/huggingface/transformers/pull/38668 | 3,127,801,547 | PR_kwDOCUB6oc6ZjZ-f | 38,668 | Fix TypeError: 'NoneType' object is not iterable for esm (#38667) | {
"login": "dbleyl",
"id": 218076,
"node_id": "MDQ6VXNlcjIxODA3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/218076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dbleyl",
"html_url": "https://github.com/dbleyl",
"followers_url": "https://api.github.com/users/dbleyl/followers",
"following_url": "https://api.github.com/users/dbleyl/following{/other_user}",
"gists_url": "https://api.github.com/users/dbleyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dbleyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dbleyl/subscriptions",
"organizations_url": "https://api.github.com/users/dbleyl/orgs",
"repos_url": "https://api.github.com/users/dbleyl/repos",
"events_url": "https://api.github.com/users/dbleyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/dbleyl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-08T01:54:31 | 2025-06-10T09:31:07 | 2025-06-09T15:23:20 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38668",
"html_url": "https://github.com/huggingface/transformers/pull/38668",
"diff_url": "https://github.com/huggingface/transformers/pull/38668.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38668.patch",
"merged_at": "2025-06-09T15:23:20"
} | Add post_init() calls to EsmForMaskedLM, EsmForTokenClassification and EsmForSequenceClassification.
# What does this PR do?
Adds `post_init()` calls to ESM models in order to prevent the error "TypeError: 'NoneType' object is not iterable".
Fixes #38667
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@MekkCyber @Cyrilvallez since they're active in the referenced PR from the bug, or anyone familiar with the situation.
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38668/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38667 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38667/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38667/comments | https://api.github.com/repos/huggingface/transformers/issues/38667/events | https://github.com/huggingface/transformers/issues/38667 | 3,127,787,223 | I_kwDOCUB6oc66bj7X | 38,667 | TypeError: 'NoneType' object is not iterable in ESM when using DDP training | {
"login": "dbleyl",
"id": 218076,
"node_id": "MDQ6VXNlcjIxODA3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/218076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dbleyl",
"html_url": "https://github.com/dbleyl",
"followers_url": "https://api.github.com/users/dbleyl/followers",
"following_url": "https://api.github.com/users/dbleyl/following{/other_user}",
"gists_url": "https://api.github.com/users/dbleyl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dbleyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dbleyl/subscriptions",
"organizations_url": "https://api.github.com/users/dbleyl/orgs",
"repos_url": "https://api.github.com/users/dbleyl/repos",
"events_url": "https://api.github.com/users/dbleyl/events{/privacy}",
"received_events_url": "https://api.github.com/users/dbleyl/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-08T01:35:54 | 2025-07-16T08:02:46 | 2025-07-16T08:02:46 | CONTRIBUTOR | null | null | null | null | ### System Info
2 machines:
- `transformers` version: 4.52.4
- Platform: Linux-6.14.10-arch1-1-x86_64-with-glibc2.41
- Python version: 3.12.10
- Huggingface_hub version: 0.32.4
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.7.1+cu128 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: yes
- Using GPU in script?: yes, via accelerate command.
- GPU type: NVIDIA GeForce RTX 4060 Ti
- `transformers` version: 4.52.4
- Platform: Linux-6.14.10-arch1-1-x86_64-with-glibc2.41
- Python version: 3.12.10
- Huggingface_hub version: 0.32.4
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.7.1+cu128 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: ddp via accelerate
- Using GPU in script?: yes
- GPU type: NVIDIA GeForce RTX 5090
### Who can help?
@MekkCyber @Cyrilvallez
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Just configure accelerate and load an ESM model other than `EsmModel`, such as `EsmForMaskedLM` or `EsmForSequenceClassification` using the `.from_pretrained(...)` syntax.
You will see:
```bash
transformers/modeling_utils.py", line 6031, in caching_allocator_warmup
[rank1]: re.compile("|".join([re.escape(plan) for plan in model._tp_plan]))
[rank1]: ^^^^^^^^^^^^^^
[rank1]: TypeError: 'NoneType' object is not iterable
```
The issue is that `EsmModel` calls `post_init` in `__init__` but the other three classes do not. This is the same problem in [Pull Request 37708](https://github.com/huggingface/transformers/pull/37708), which was rejected for proposing a check instead of fixing the model.
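A stripped-down sketch of the failure mode, using toy classes (not the actual transformers source) to show why a missing `post_init()` call leaves `_tp_plan` as `None`:

```python
import re


class ToyPreTrainedModel:
    """Toy stand-in for PreTrainedModel: _tp_plan stays None until post_init()."""

    def __init__(self):
        self._tp_plan = None

    def post_init(self):
        # In transformers, post_init() (among other things) populates _tp_plan.
        self._tp_plan = {"layers.*.self_attn.q_proj": "colwise"}


class ToyEsmModel(ToyPreTrainedModel):
    def __init__(self):
        super().__init__()
        self.post_init()  # EsmModel calls this, so warmup works


class ToyEsmForMaskedLM(ToyPreTrainedModel):
    def __init__(self):
        super().__init__()
        # missing self.post_init() -> _tp_plan is still None


def caching_allocator_warmup(model):
    # mirrors the failing line in modeling_utils.py
    return re.compile("|".join(re.escape(plan) for plan in model._tp_plan))


caching_allocator_warmup(ToyEsmModel())  # fine
try:
    caching_allocator_warmup(ToyEsmForMaskedLM())
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable
```

Adding `self.post_init()` at the end of each `__init__` (the fix in the prepared PR) makes all four classes behave like `ToyEsmModel` here.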
I have prepared a PR for this and am ready to submit it. I believe it follows all the guidelines. I'm opening this issue for tracking purposes.
### Expected behavior
`_tp_plan` should be initialized by `post_init`, so the reported error is not raised. The pull request is ready to be submitted. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38667/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38666 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38666/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38666/comments | https://api.github.com/repos/huggingface/transformers/issues/38666/events | https://github.com/huggingface/transformers/pull/38666 | 3,127,610,370 | PR_kwDOCUB6oc6Ziv7z | 38,666 | docs: clarify compute_loss parameter num_items_in_batch | {
"login": "alialvii",
"id": 170040813,
"node_id": "U_kgDOCiKd7Q",
"avatar_url": "https://avatars.githubusercontent.com/u/170040813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alialvii",
"html_url": "https://github.com/alialvii",
"followers_url": "https://api.github.com/users/alialvii/followers",
"following_url": "https://api.github.com/users/alialvii/following{/other_user}",
"gists_url": "https://api.github.com/users/alialvii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alialvii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alialvii/subscriptions",
"organizations_url": "https://api.github.com/users/alialvii/orgs",
"repos_url": "https://api.github.com/users/alialvii/repos",
"events_url": "https://api.github.com/users/alialvii/events{/privacy}",
"received_events_url": "https://api.github.com/users/alialvii/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-07T22:34:45 | 2025-06-09T22:36:58 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38666",
"html_url": "https://github.com/huggingface/transformers/pull/38666",
"diff_url": "https://github.com/huggingface/transformers/pull/38666.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38666.patch",
"merged_at": null
} | # What does this PR do?
This PR addresses [huggingface/hub-docs#1600](https://github.com/huggingface/hub-docs/issues/1600).
The issue relates to the [`Trainer.compute_loss`](https://huggingface.co/docs/transformers/v4.48.2/en/trainer) documentation, which is maintained in the `transformers` repository.
I have added a clarifying note in the "Customize" section of the Trainer documentation to mention that the `num_items_in_batch` parameter should be included when overriding `compute_loss`, as of version 4.48. This is to help users avoid potential errors.
Fixes: See [huggingface/hub-docs#1600](https://github.com/huggingface/hub-docs/issues/1600)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- trainer: @zach-huggingface and @SunMarc
Documentation: @stevhliu
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38666/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/38665 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38665/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38665/comments | https://api.github.com/repos/huggingface/transformers/issues/38665/events | https://github.com/huggingface/transformers/issues/38665 | 3,127,555,041 | I_kwDOCUB6oc66arPh | 38,665 | Exception while inference Qwen2VL and Qwen2VL, assert module.weight.shape[1] == 1 | {
"login": "iglaweb",
"id": 3032604,
"node_id": "MDQ6VXNlcjMwMzI2MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3032604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iglaweb",
"html_url": "https://github.com/iglaweb",
"followers_url": "https://api.github.com/users/iglaweb/followers",
"following_url": "https://api.github.com/users/iglaweb/following{/other_user}",
"gists_url": "https://api.github.com/users/iglaweb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iglaweb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iglaweb/subscriptions",
"organizations_url": "https://api.github.com/users/iglaweb/orgs",
"repos_url": "https://api.github.com/users/iglaweb/repos",
"events_url": "https://api.github.com/users/iglaweb/events{/privacy}",
"received_events_url": "https://api.github.com/users/iglaweb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-07T21:28:40 | 2025-08-04T08:03:05 | 2025-08-04T08:03:05 | NONE | null | null | null | null | ### System Info
transformers version: 4.52.3
Platform: Linux-5.10.0-1029-oem-x86_64-with-glibc2.31
GPU device: Quadro RTX 8000
Python version: 3.10
Huggingface_hub version: 0.32.2
Safetensors version: 0.5.3
Accelerate version: 0.34.2
PyTorch version (GPU?): 2.5.0+cu124
Using distributed or parallel set-up in script?: No
### Who can help?
@zucchini-nlp
@qubvel
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I followed these tutorials:
https://colab.research.google.com/github/huggingface/cookbook/blob/main/notebooks/en/fine_tuning_vlm_trl.ipynb
https://github.com/QwenLM/Qwen2.5-VL/blob/d2240f11656bfe404b9ba56db4e51cd09f522ff1/qwen-vl-finetune/qwenvl/train/train_qwen.py
Steps to reproduce the issue:
1. Fine-tune a Qwen2VL or Qwen2.5VL model (e.g. "Qwen/Qwen2.5-VL-3B-Instruct") on a custom dataset (QLoRA and LoRA enabled, using CUDA)
2. Run inference on a video (using CUDA).
Full log and exception:
```
- This IS expected if you are initializing Qwen2_5_VLForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Qwen2_5_VLForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of Qwen2_5_VLForConditionalGeneration were not initialized from the model checkpoint at /home/user/Desktop/demo/tmp/weights_2025-05-30_13.06.42.192256_qwen_qwen2.5-vl-3b-instruct_b2_e1_vf16_fps1.0/model_qwen2vl_video-lora and are newly initialized: ['model.language_model.layers.0.self_attn.k_proj.bias', 'model.language_model.layers.0.self_attn.k_proj.weight', 'model.language_model.layers.0.self_attn.o_proj.weight', 'model.language_model.layers.0.self_attn.q_proj.bias', 'model.language_model.layers.0.self_attn.q_proj.weight', 'model.language_model.layers.0.self_attn.v_proj.bias', 'model.language_model.layers.0.self_attn.v_proj.weight', 'model.language_model.layers.1.self_attn.k_proj.bias', 'model.language_model.layers.1.self_attn.k_proj.weight', 'model.language_model.layers.1.self_attn.o_proj.weight', 'model.language_model.layers.1.self_attn.q_proj.bias', 'model.language_model.layers.1.self_attn.q_proj.weight', 'model.language_model.layers.1.self_attn.v_proj.bias', 'model.language_model.layers.1.self_attn.v_proj.weight', 'model.language_model.layers.10.self_attn.k_proj.bias', 'model.language_model.layers.10.self_attn.k_proj.weight', 'model.language_model.layers.10.self_attn.o_proj.weight', 'model.language_model.layers.10.self_attn.q_proj.bias', 'model.language_model.layers.10.self_attn.q_proj.weight', 'model.language_model.layers.10.self_attn.v_proj.bias', 'model.language_model.layers.10.self_attn.v_proj.weight', 'model.language_model.layers.11.self_attn.k_proj.bias', 'model.language_model.layers.11.self_attn.k_proj.weight', 'model.language_model.layers.11.self_attn.o_proj.weight', 'model.language_model.layers.11.self_attn.q_proj.bias', 'model.language_model.layers.11.self_attn.q_proj.weight', 'model.language_model.layers.11.self_attn.v_proj.bias', 'model.language_model.layers.11.self_attn.v_proj.weight', 'model.language_model.layers.12.self_attn.k_proj.bias', 'model.language_model.layers.12.self_attn.k_proj.weight', 
'model.language_model.layers.12.self_attn.o_proj.weight', 'model.language_model.layers.12.self_attn.q_proj.bias', 'model.language_model.layers.12.self_attn.q_proj.weight', 'model.language_model.layers.12.self_attn.v_proj.bias', 'model.language_model.layers.12.self_attn.v_proj.weight', 'model.language_model.layers.13.self_attn.k_proj.bias', 'model.language_model.layers.13.self_attn.k_proj.weight', 'model.language_model.layers.13.self_attn.o_proj.weight', 'model.language_model.layers.13.self_attn.q_proj.bias', 'model.language_model.layers.13.self_attn.q_proj.weight', 'model.language_model.layers.13.self_attn.v_proj.bias', 'model.language_model.layers.13.self_attn.v_proj.weight', 'model.language_model.layers.14.self_attn.k_proj.bias', 'model.language_model.layers.14.self_attn.k_proj.weight', 'model.language_model.layers.14.self_attn.o_proj.weight', 'model.language_model.layers.14.self_attn.q_proj.bias', 'model.language_model.layers.14.self_attn.q_proj.weight', 'model.language_model.layers.14.self_attn.v_proj.bias', 'model.language_model.layers.14.self_attn.v_proj.weight', 'model.language_model.layers.15.self_attn.k_proj.bias', 'model.language_model.layers.15.self_attn.k_proj.weight', 'model.language_model.layers.15.self_attn.o_proj.weight', 'model.language_model.layers.15.self_attn.q_proj.bias', 'model.language_model.layers.15.self_attn.q_proj.weight', 'model.language_model.layers.15.self_attn.v_proj.bias', 'model.language_model.layers.15.self_attn.v_proj.weight', 'model.language_model.layers.16.self_attn.k_proj.bias', 'model.language_model.layers.16.self_attn.k_proj.weight', 'model.language_model.layers.16.self_attn.o_proj.weight', 'model.language_model.layers.16.self_attn.q_proj.bias', 'model.language_model.layers.16.self_attn.q_proj.weight', 'model.language_model.layers.16.self_attn.v_proj.bias', 'model.language_model.layers.16.self_attn.v_proj.weight', 'model.language_model.layers.17.self_attn.k_proj.bias', 'model.language_model.layers.17.self_attn.k_proj.weight', 
'model.language_model.layers.17.self_attn.o_proj.weight', 'model.language_model.layers.17.self_attn.q_proj.bias', 'model.language_model.layers.17.self_attn.q_proj.weight', 'model.language_model.layers.17.self_attn.v_proj.bias', 'model.language_model.layers.17.self_attn.v_proj.weight', 'model.language_model.layers.18.self_attn.k_proj.bias', 'model.language_model.layers.18.self_attn.k_proj.weight', 'model.language_model.layers.18.self_attn.o_proj.weight', 'model.language_model.layers.18.self_attn.q_proj.bias', 'model.language_model.layers.18.self_attn.q_proj.weight', 'model.language_model.layers.18.self_attn.v_proj.bias', 'model.language_model.layers.18.self_attn.v_proj.weight', 'model.language_model.layers.19.self_attn.k_proj.bias', 'model.language_model.layers.19.self_attn.k_proj.weight', 'model.language_model.layers.19.self_attn.o_proj.weight', 'model.language_model.layers.19.self_attn.q_proj.bias', 'model.language_model.layers.19.self_attn.q_proj.weight', 'model.language_model.layers.19.self_attn.v_proj.bias', 'model.language_model.layers.19.self_attn.v_proj.weight', 'model.language_model.layers.2.self_attn.k_proj.bias', 'model.language_model.layers.2.self_attn.k_proj.weight', 'model.language_model.layers.2.self_attn.o_proj.weight', 'model.language_model.layers.2.self_attn.q_proj.bias', 'model.language_model.layers.2.self_attn.q_proj.weight', 'model.language_model.layers.2.self_attn.v_proj.bias', 'model.language_model.layers.2.self_attn.v_proj.weight', 'model.language_model.layers.20.self_attn.k_proj.bias', 'model.language_model.layers.20.self_attn.k_proj.weight', 'model.language_model.layers.20.self_attn.o_proj.weight', 'model.language_model.layers.20.self_attn.q_proj.bias', 'model.language_model.layers.20.self_attn.q_proj.weight', 'model.language_model.layers.20.self_attn.v_proj.bias', 'model.language_model.layers.20.self_attn.v_proj.weight', 'model.language_model.layers.21.self_attn.k_proj.bias', 'model.language_model.layers.21.self_attn.k_proj.weight', 
'model.language_model.layers.21.self_attn.o_proj.weight', 'model.language_model.layers.21.self_attn.q_proj.bias', 'model.language_model.layers.21.self_attn.q_proj.weight', 'model.language_model.layers.21.self_attn.v_proj.bias', 'model.language_model.layers.21.self_attn.v_proj.weight', 'model.language_model.layers.22.self_attn.k_proj.bias', 'model.language_model.layers.22.self_attn.k_proj.weight', 'model.language_model.layers.22.self_attn.o_proj.weight', 'model.language_model.layers.22.self_attn.q_proj.bias', 'model.language_model.layers.22.self_attn.q_proj.weight', 'model.language_model.layers.22.self_attn.v_proj.bias', 'model.language_model.layers.22.self_attn.v_proj.weight', 'model.language_model.layers.23.self_attn.k_proj.bias', 'model.language_model.layers.23.self_attn.k_proj.weight', 'model.language_model.layers.23.self_attn.o_proj.weight', 'model.language_model.layers.23.self_attn.q_proj.bias', 'model.language_model.layers.23.self_attn.q_proj.weight', 'model.language_model.layers.23.self_attn.v_proj.bias', 'model.language_model.layers.23.self_attn.v_proj.weight', 'model.language_model.layers.24.self_attn.k_proj.bias', 'model.language_model.layers.24.self_attn.k_proj.weight', 'model.language_model.layers.24.self_attn.o_proj.weight', 'model.language_model.layers.24.self_attn.q_proj.bias', 'model.language_model.layers.24.self_attn.q_proj.weight', 'model.language_model.layers.24.self_attn.v_proj.bias', 'model.language_model.layers.24.self_attn.v_proj.weight', 'model.language_model.layers.25.self_attn.k_proj.bias', 'model.language_model.layers.25.self_attn.k_proj.weight', 'model.language_model.layers.25.self_attn.o_proj.weight', 'model.language_model.layers.25.self_attn.q_proj.bias', 'model.language_model.layers.25.self_attn.q_proj.weight', 'model.language_model.layers.25.self_attn.v_proj.bias', 'model.language_model.layers.25.self_attn.v_proj.weight', 'model.language_model.layers.26.self_attn.k_proj.bias', 'model.language_model.layers.26.self_attn.k_proj.weight', 
'model.language_model.layers.26.self_attn.o_proj.weight', 'model.language_model.layers.26.self_attn.q_proj.bias', 'model.language_model.layers.26.self_attn.q_proj.weight', 'model.language_model.layers.26.self_attn.v_proj.bias', 'model.language_model.layers.26.self_attn.v_proj.weight', 'model.language_model.layers.27.self_attn.k_proj.bias', 'model.language_model.layers.27.self_attn.k_proj.weight', 'model.language_model.layers.27.self_attn.o_proj.weight', 'model.language_model.layers.27.self_attn.q_proj.bias', 'model.language_model.layers.27.self_attn.q_proj.weight', 'model.language_model.layers.27.self_attn.v_proj.bias', 'model.language_model.layers.27.self_attn.v_proj.weight', 'model.language_model.layers.28.self_attn.k_proj.bias', 'model.language_model.layers.28.self_attn.k_proj.weight', 'model.language_model.layers.28.self_attn.o_proj.weight', 'model.language_model.layers.28.self_attn.q_proj.bias', 'model.language_model.layers.28.self_attn.q_proj.weight', 'model.language_model.layers.28.self_attn.v_proj.bias', 'model.language_model.layers.28.self_attn.v_proj.weight', 'model.language_model.layers.29.self_attn.k_proj.bias', 'model.language_model.layers.29.self_attn.k_proj.weight', 'model.language_model.layers.29.self_attn.o_proj.weight', 'model.language_model.layers.29.self_attn.q_proj.bias', 'model.language_model.layers.29.self_attn.q_proj.weight', 'model.language_model.layers.29.self_attn.v_proj.bias', 'model.language_model.layers.29.self_attn.v_proj.weight', 'model.language_model.layers.3.self_attn.k_proj.bias', 'model.language_model.layers.3.self_attn.k_proj.weight', 'model.language_model.layers.3.self_attn.o_proj.weight', 'model.language_model.layers.3.self_attn.q_proj.bias', 'model.language_model.layers.3.self_attn.q_proj.weight', 'model.language_model.layers.3.self_attn.v_proj.bias', 'model.language_model.layers.3.self_attn.v_proj.weight', 'model.language_model.layers.30.self_attn.k_proj.bias', 'model.language_model.layers.30.self_attn.k_proj.weight', 
'model.language_model.layers.30.self_attn.o_proj.weight', 'model.language_model.layers.30.self_attn.q_proj.bias', 'model.language_model.layers.30.self_attn.q_proj.weight', 'model.language_model.layers.30.self_attn.v_proj.bias', 'model.language_model.layers.30.self_attn.v_proj.weight', 'model.language_model.layers.31.self_attn.k_proj.bias', 'model.language_model.layers.31.self_attn.k_proj.weight', 'model.language_model.layers.31.self_attn.o_proj.weight', 'model.language_model.layers.31.self_attn.q_proj.bias', 'model.language_model.layers.31.self_attn.q_proj.weight', 'model.language_model.layers.31.self_attn.v_proj.bias', 'model.language_model.layers.31.self_attn.v_proj.weight', 'model.language_model.layers.32.self_attn.k_proj.bias', 'model.language_model.layers.32.self_attn.k_proj.weight', 'model.language_model.layers.32.self_attn.o_proj.weight', 'model.language_model.layers.32.self_attn.q_proj.bias', 'model.language_model.layers.32.self_attn.q_proj.weight', 'model.language_model.layers.32.self_attn.v_proj.bias', 'model.language_model.layers.32.self_attn.v_proj.weight', 'model.language_model.layers.33.self_attn.k_proj.bias', 'model.language_model.layers.33.self_attn.k_proj.weight', 'model.language_model.layers.33.self_attn.o_proj.weight', 'model.language_model.layers.33.self_attn.q_proj.bias', 'model.language_model.layers.33.self_attn.q_proj.weight', 'model.language_model.layers.33.self_attn.v_proj.bias', 'model.language_model.layers.33.self_attn.v_proj.weight', 'model.language_model.layers.34.self_attn.k_proj.bias', 'model.language_model.layers.34.self_attn.k_proj.weight', 'model.language_model.layers.34.self_attn.o_proj.weight', 'model.language_model.layers.34.self_attn.q_proj.bias', 'model.language_model.layers.34.self_attn.q_proj.weight', 'model.language_model.layers.34.self_attn.v_proj.bias', 'model.language_model.layers.34.self_attn.v_proj.weight', 'model.language_model.layers.35.self_attn.k_proj.bias', 'model.language_model.layers.35.self_attn.k_proj.weight', 
'model.language_model.layers.35.self_attn.o_proj.weight', 'model.language_model.layers.35.self_attn.q_proj.bias', 'model.language_model.layers.35.self_attn.q_proj.weight', 'model.language_model.layers.35.self_attn.v_proj.bias', 'model.language_model.layers.35.self_attn.v_proj.weight', 'model.language_model.layers.4.self_attn.k_proj.bias', 'model.language_model.layers.4.self_attn.k_proj.weight', 'model.language_model.layers.4.self_attn.o_proj.weight', 'model.language_model.layers.4.self_attn.q_proj.bias', 'model.language_model.layers.4.self_attn.q_proj.weight', 'model.language_model.layers.4.self_attn.v_proj.bias', 'model.language_model.layers.4.self_attn.v_proj.weight', 'model.language_model.layers.5.self_attn.k_proj.bias', 'model.language_model.layers.5.self_attn.k_proj.weight', 'model.language_model.layers.5.self_attn.o_proj.weight', 'model.language_model.layers.5.self_attn.q_proj.bias', 'model.language_model.layers.5.self_attn.q_proj.weight', 'model.language_model.layers.5.self_attn.v_proj.bias', 'model.language_model.layers.5.self_attn.v_proj.weight', 'model.language_model.layers.6.self_attn.k_proj.bias', 'model.language_model.layers.6.self_attn.k_proj.weight', 'model.language_model.layers.6.self_attn.o_proj.weight', 'model.language_model.layers.6.self_attn.q_proj.bias', 'model.language_model.layers.6.self_attn.q_proj.weight', 'model.language_model.layers.6.self_attn.v_proj.bias', 'model.language_model.layers.6.self_attn.v_proj.weight', 'model.language_model.layers.7.self_attn.k_proj.bias', 'model.language_model.layers.7.self_attn.k_proj.weight', 'model.language_model.layers.7.self_attn.o_proj.weight', 'model.language_model.layers.7.self_attn.q_proj.bias', 'model.language_model.layers.7.self_attn.q_proj.weight', 'model.language_model.layers.7.self_attn.v_proj.bias', 'model.language_model.layers.7.self_attn.v_proj.weight', 'model.language_model.layers.8.self_attn.k_proj.bias', 'model.language_model.layers.8.self_attn.k_proj.weight', 
'model.language_model.layers.8.self_attn.o_proj.weight', 'model.language_model.layers.8.self_attn.q_proj.bias', 'model.language_model.layers.8.self_attn.q_proj.weight', 'model.language_model.layers.8.self_attn.v_proj.bias', 'model.language_model.layers.8.self_attn.v_proj.weight', 'model.language_model.layers.9.self_attn.k_proj.bias', 'model.language_model.layers.9.self_attn.k_proj.weight', 'model.language_model.layers.9.self_attn.o_proj.weight', 'model.language_model.layers.9.self_attn.q_proj.bias', 'model.language_model.layers.9.self_attn.q_proj.weight', 'model.language_model.layers.9.self_attn.v_proj.bias', 'model.language_model.layers.9.self_attn.v_proj.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
0%| | 0/111 [00:00<?, ?it/s]Unused or unrecognized kwargs: fps, return_tensors.
/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/bitsandbytes/nn/modules.py:354: UserWarning: FP4 quantization state not initialized. Please call .cuda() or .to(device) on the LinearFP4 layer first.
warnings.warn(
0%| | 0/111 [00:16<?, ?it/s]
Traceback (most recent call last):
File "/home/user/Desktop/demo/hf_qwen_demo_video.py", line 356, in <module>
eval_videos(
File "/home/user/Desktop/demo/hf_qwen_demo_video.py", line 212, in eval_videos
pred_caption_list = run_model_preds(
File "/home/user/Desktop/demo/hf_qwen_demo_video.py", line 180, in run_model_preds
output_text = run_model_single_inference(
File "/home/user/Desktop/demo/hf_qwen_demo_video.py", line 116, in run_model_single_inference
output_ids = model.generate(**inputs, max_new_tokens=max_token_length)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/transformers/generation/utils.py", line 2597, in generate
result = self._sample(
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/transformers/generation/utils.py", line 3557, in _sample
outputs = self(**model_inputs, return_dict=True)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/transformers/utils/generic.py", line 969, in wrapper
output = func(self, *args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1908, in forward
outputs = self.model(
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1728, in forward
outputs = self.language_model(
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1191, in forward
layer_outputs = decoder_layer(
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1053, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 938, in forward
query_states = self.q_proj(hidden_states)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/bitsandbytes/nn/modules.py", line 468, in forward
fix_4bit_weight_quant_state_from_module(self)
File "/home/szhou/anaconda3/envs/my_project/lib/python3.10/site-packages/bitsandbytes/nn/modules.py", line 360, in fix_4bit_weight_quant_state_from_module
assert module.weight.shape[1] == 1
AssertionError
Process finished with exit code 1
```
### Expected behavior
I expect the inference to complete without errors. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38665/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38664/comments | https://api.github.com/repos/huggingface/transformers/issues/38664/events | https://github.com/huggingface/transformers/pull/38664 | 3,127,505,864 | PR_kwDOCUB6oc6ZiYtZ | 38,664 | Fixed modeling_auto.py MODEL_FOR_MASK_GENERATION_MAPPING_NAMES variable | {
"login": "sbucaille",
"id": 24275548,
"node_id": "MDQ6VXNlcjI0Mjc1NTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/24275548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbucaille",
"html_url": "https://github.com/sbucaille",
"followers_url": "https://api.github.com/users/sbucaille/followers",
"following_url": "https://api.github.com/users/sbucaille/following{/other_user}",
"gists_url": "https://api.github.com/users/sbucaille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sbucaille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbucaille/subscriptions",
"organizations_url": "https://api.github.com/users/sbucaille/orgs",
"repos_url": "https://api.github.com/users/sbucaille/repos",
"events_url": "https://api.github.com/users/sbucaille/events{/privacy}",
"received_events_url": "https://api.github.com/users/sbucaille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-07T20:59:36 | 2025-07-06T13:23:36 | 2025-06-09T13:40:46 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38664",
"html_url": "https://github.com/huggingface/transformers/pull/38664",
"diff_url": "https://github.com/huggingface/transformers/pull/38664.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38664.patch",
"merged_at": "2025-06-09T13:40:46"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/38663
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1 @amyeroberts @qubvel
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38664/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38663 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38663/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38663/comments | https://api.github.com/repos/huggingface/transformers/issues/38663/events | https://github.com/huggingface/transformers/issues/38663 | 3,127,503,194 | I_kwDOCUB6oc66aela | 38,663 | MODEL_FOR_MASK_GENERATION_MAPPING_NAMES variable is present twice in modeling_auto.py | {
"login": "sbucaille",
"id": 24275548,
"node_id": "MDQ6VXNlcjI0Mjc1NTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/24275548?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbucaille",
"html_url": "https://github.com/sbucaille",
"followers_url": "https://api.github.com/users/sbucaille/followers",
"following_url": "https://api.github.com/users/sbucaille/following{/other_user}",
"gists_url": "https://api.github.com/users/sbucaille/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sbucaille/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbucaille/subscriptions",
"organizations_url": "https://api.github.com/users/sbucaille/orgs",
"repos_url": "https://api.github.com/users/sbucaille/repos",
"events_url": "https://api.github.com/users/sbucaille/events{/privacy}",
"received_events_url": "https://api.github.com/users/sbucaille/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-07T20:57:42 | 2025-06-09T13:40:47 | 2025-06-09T13:40:47 | CONTRIBUTOR | null | null | null | null | Hi,
I was looking around the Auto classes and noticed that ``MODEL_FOR_MASK_GENERATION_MAPPING_NAMES`` is present twice in ``modeling_auto.py``. It seems to come from this [PR](https://github.com/huggingface/transformers/pull/35147).
See here
https://github.com/huggingface/transformers/blob/ebeec13609b537f9c760292354118c9d1d63f5a0/src/transformers/models/auto/modeling_auto.py#L1529C1-L1539C2
I don't think it is intended, I'll make a PR.
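For what it's worth, a duplicate definition like this can be caught mechanically; a minimal sketch (the helper name is mine, not part of the repo) using the stdlib `ast` module:

```python
import ast
from collections import Counter

def duplicate_top_level_assignments(source: str) -> list[str]:
    """Return names assigned more than once at module top level."""
    counts = Counter()
    for node in ast.parse(source).body:
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    counts[target.id] += 1
    return [name for name, n in counts.items() if n > 1]

source = (
    "MODEL_FOR_MASK_GENERATION_MAPPING_NAMES = {}\n"
    "OTHER_MAPPING = {}\n"
    "MODEL_FOR_MASK_GENERATION_MAPPING_NAMES = {}\n"
)
print(duplicate_top_level_assignments(source))
# → ['MODEL_FOR_MASK_GENERATION_MAPPING_NAMES']
```

Running this over `src/transformers/models/auto/modeling_auto.py` would flag the duplicated mapping.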
Steven | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38663/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38662 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38662/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38662/comments | https://api.github.com/repos/huggingface/transformers/issues/38662/events | https://github.com/huggingface/transformers/issues/38662 | 3,127,496,547 | I_kwDOCUB6oc66ac9j | 38,662 | Whisper models appear to be broken with Flash Attention 2 | {
"login": "Anjum48",
"id": 13783303,
"node_id": "MDQ6VXNlcjEzNzgzMzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/13783303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Anjum48",
"html_url": "https://github.com/Anjum48",
"followers_url": "https://api.github.com/users/Anjum48/followers",
"following_url": "https://api.github.com/users/Anjum48/following{/other_user}",
"gists_url": "https://api.github.com/users/Anjum48/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Anjum48/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Anjum48/subscriptions",
"organizations_url": "https://api.github.com/users/Anjum48/orgs",
"repos_url": "https://api.github.com/users/Anjum48/repos",
"events_url": "https://api.github.com/users/Anjum48/events{/privacy}",
"received_events_url": "https://api.github.com/users/Anjum48/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 6470596964,
"node_id": "LA_kwDOCUB6oc8AAAABga15ZA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Audio",
"name": "Audio",
"color": "760453",
"default": false,
"description": ""
},
{
"id": 7377881103,
"node_id": "LA_kwDOCUB6oc8AAAABt8GIDw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Whisper",
"name": "Whisper",
"color": "83303E",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-06-07T20:52:22 | 2025-07-18T17:22:17 | 2025-07-18T17:22:17 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.52.4
- Platform: Linux-6.8.0-60-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.32.4
- Safetensors version: 0.4.5
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.7.1+cu128 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 5090
flash-attn: Built from main
### Who can help?
@eustlb
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Code to reproduce:
```python
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "distil-whisper/distil-large-v3.5"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id,
torch_dtype=torch_dtype,
low_cpu_mem_usage=True,
use_safetensors=True,
attn_implementation="flash_attention_2",
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
max_new_tokens=128,
torch_dtype=torch_dtype,
device=device,
return_timestamps=True
)
result = pipe(str(audio_path))
```
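As a workaround until this is resolved, the attention backend can be chosen at load time instead of hard-coded; a minimal sketch (the helper name is mine; `"sdpa"` and `"flash_attention_2"` are the standard `attn_implementation` values in recent transformers):

```python
import importlib.util

def pick_attn_implementation(prefer_flash: bool = True) -> str:
    """Use flash_attention_2 only when the flash-attn package is importable,
    otherwise fall back to PyTorch's scaled-dot-product attention."""
    if prefer_flash and importlib.util.find_spec("flash_attn") is not None:
        return "flash_attention_2"
    return "sdpa"

# Hypothetical usage with the loader above:
# model = AutoModelForSpeechSeq2Seq.from_pretrained(
#     model_id,
#     torch_dtype=torch_dtype,
#     attn_implementation=pick_attn_implementation(),
# )
```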
The audio file is long (~2 hours). Note that the model runs fine if flash attention is disabled. I have tested various Whisper models and get the same error.
Rolling back to 4.46.1 seems to run OK, but I'm not sure flash attention is actually being used. Transcription on a 5090 for some models (e.g. openai/whisper-base) is significantly slower than on a 3090, but this might be due to hardware optimisations on the flash-attn side...
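To check whether flash attention is actually being exercised rather than silently slower, a crude wall-clock comparison across backends helps; a sketch (the helper name is mine):

```python
import time

def time_call(fn, *args, warmup=1, iters=3, **kwargs):
    """Average wall-clock seconds per call, after warmup passes."""
    for _ in range(warmup):
        fn(*args, **kwargs)
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args, **kwargs)
    return (time.perf_counter() - start) / iters

# Hypothetical comparison, reusing the pipeline setup from the snippet above:
# for impl in ("sdpa", "flash_attention_2"):
#     ... build model/pipe with attn_implementation=impl ...
#     print(impl, time_call(pipe, str(audio_path), warmup=0, iters=1))
```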
Error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[13], line 28
15 processor = AutoProcessor.from_pretrained(model_id)
17 pipe = pipeline(
18 "automatic-speech-recognition",
19 model=model,
(...)
25 return_timestamps=True
26 )
---> 28 result = pipe(str(audio_path))
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/pipelines/automatic_speech_recognition.py:295, in AutomaticSpeechRecognitionPipeline.__call__(self, inputs, **kwargs)
234 def __call__(
235 self,
236 inputs: Union[np.ndarray, bytes, str],
237 **kwargs,
238 ):
239 """
240 Transcribe the audio sequence(s) given as inputs to text. See the [`AutomaticSpeechRecognitionPipeline`]
241 documentation for more information.
(...)
293 `"".join(chunk["text"] for chunk in output["chunks"])`.
294 """
--> 295 return super().__call__(inputs, **kwargs)
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/pipelines/base.py:1423, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1421 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
1422 elif self.framework == "pt" and isinstance(self, ChunkPipeline):
-> 1423 return next(
1424 iter(
1425 self.get_iterator(
1426 [inputs], num_workers, batch_size, preprocess_params, forward_params, postprocess_params
1427 )
1428 )
1429 )
1430 else:
1431 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/pipelines/pt_utils.py:124, in PipelineIterator.__next__(self)
121 return self.loader_batch_item()
123 # We're out of items within a batch
--> 124 item = next(self.iterator)
125 processed = self.infer(item, **self.params)
126 # We now have a batch of "inferred things".
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/pipelines/pt_utils.py:269, in PipelinePackIterator.__next__(self)
266 return accumulator
268 while not is_last:
--> 269 processed = self.infer(next(self.iterator), **self.params)
270 if self.loader_batch_size is not None:
271 if isinstance(processed, torch.Tensor):
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/pipelines/base.py:1338, in Pipeline.forward(self, model_inputs, **forward_params)
1336 with inference_context():
1337 model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)
-> 1338 model_outputs = self._forward(model_inputs, **forward_params)
1339 model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu"))
1340 else:
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/pipelines/automatic_speech_recognition.py:527, in AutomaticSpeechRecognitionPipeline._forward(self, model_inputs, return_timestamps, **generate_kwargs)
524 if "generation_config" not in generate_kwargs:
525 generate_kwargs["generation_config"] = self.generation_config
--> 527 tokens = self.model.generate(
528 inputs=inputs,
529 attention_mask=attention_mask,
530 **generate_kwargs,
531 )
532 # whisper longform generation stores timestamps in "segments"
533 if return_timestamps == "word" and self.type == "seq2seq_whisper":
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/models/whisper/generation_whisper.py:774, in WhisperGenerationMixin.generate(self, input_features, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, return_timestamps, task, language, is_multilingual, prompt_ids, prompt_condition_type, condition_on_prev_tokens, temperature, compression_ratio_threshold, logprob_threshold, no_speech_threshold, num_segment_frames, attention_mask, time_precision, time_precision_features, return_token_timestamps, return_segments, return_dict_in_generate, force_unique_generate_call, **kwargs)
765 proc.set_begin_index(decoder_input_ids.shape[-1])
767 # 6.6 Run generate with fallback
768 (
769 seek_sequences,
770 seek_outputs,
771 should_skip,
772 do_condition_on_prev_tokens,
773 model_output_type,
--> 774 ) = self.generate_with_fallback(
775 segment_input=segment_input,
776 decoder_input_ids=decoder_input_ids,
777 cur_bsz=cur_bsz,
778 batch_idx_map=batch_idx_map,
779 seek=seek,
780 num_segment_frames=num_segment_frames,
781 max_frames=max_frames,
782 temperatures=temperatures,
783 generation_config=generation_config,
784 logits_processor=logits_processor,
785 stopping_criteria=stopping_criteria,
786 prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
787 synced_gpus=synced_gpus,
788 return_token_timestamps=return_token_timestamps,
789 do_condition_on_prev_tokens=do_condition_on_prev_tokens,
790 is_shortform=is_shortform,
791 batch_size=batch_size,
792 attention_mask=attention_mask,
793 kwargs=kwargs,
794 )
796 # 6.7 In every generated sequence, split by timestamp tokens and extract segments
797 for i, seek_sequence in enumerate(seek_sequences):
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/models/whisper/generation_whisper.py:950, in WhisperGenerationMixin.generate_with_fallback(self, segment_input, decoder_input_ids, cur_bsz, batch_idx_map, seek, num_segment_frames, max_frames, temperatures, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, return_token_timestamps, do_condition_on_prev_tokens, is_shortform, batch_size, attention_mask, kwargs)
945 if generate_kwargs.get("encoder_outputs") is not None:
946 generate_kwargs["encoder_outputs"] = F.pad(
947 generate_kwargs["encoder_outputs"], (0, 0, 0, 0, 0, batch_size - cur_bsz), value=0
948 )
--> 950 seek_outputs = super().generate(
951 segment_input,
952 generation_config=generation_config,
953 logits_processor=logits_processor,
954 stopping_criteria=stopping_criteria,
955 prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,
956 synced_gpus=synced_gpus,
957 decoder_input_ids=decoder_input_ids,
958 attention_mask=attention_mask,
959 **generate_kwargs,
960 )
962 model_output_type = type(seek_outputs)
964 # post-process sequence tokens and outputs to be in list form
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py:116, in context_decorator.<locals>.decorate_context(*args, **kwargs)
113 @functools.wraps(func)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/generation/utils.py:2616, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, use_model_defaults, custom_generate, **kwargs)
2609 input_ids, model_kwargs = self._expand_inputs_for_generation(
2610 input_ids=input_ids,
2611 expand_size=generation_config.num_beams,
2612 is_encoder_decoder=self.config.is_encoder_decoder,
2613 **model_kwargs,
2614 )
2615 # 12. run beam sample
-> 2616 result = self._beam_search(
2617 input_ids,
2618 logits_processor=prepared_logits_processor,
2619 stopping_criteria=prepared_stopping_criteria,
2620 generation_config=generation_config,
2621 synced_gpus=synced_gpus,
2622 **model_kwargs,
2623 )
2625 elif generation_mode == GenerationMode.GROUP_BEAM_SEARCH:
2626 # 11. prepare beam search scorer
2627 beam_scorer = BeamSearchScorer(
2628 batch_size=batch_size,
2629 num_beams=generation_config.num_beams,
(...)
2635 max_length=generation_config.max_length,
2636 )
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/generation/utils.py:4030, in GenerationMixin._beam_search(self, input_ids, logits_processor, stopping_criteria, generation_config, synced_gpus, **model_kwargs)
4027 model_inputs.update({"output_attentions": output_attentions} if output_attentions else {})
4028 model_inputs.update({"output_hidden_states": output_hidden_states} if output_hidden_states else {})
-> 4030 model_outputs = self(**model_inputs, return_dict=True)
4032 # synced_gpus: don't waste resources running the code we don't need; kwargs must be updated before skipping
4033 model_kwargs = self._update_model_kwargs_for_generation(
4034 model_outputs,
4035 model_kwargs,
4036 is_encoder_decoder=self.config.is_encoder_decoder,
4037 )
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1751, in Module._wrapped_call_impl(self, *args, **kwargs)
1749 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1750 else:
-> 1751 return self._call_impl(*args, **kwargs)
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1762, in Module._call_impl(self, *args, **kwargs)
1757 # If we don't have any hooks, we want to skip the rest of the logic in
1758 # this function, and just call forward.
1759 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1760 or _global_backward_pre_hooks or _global_backward_hooks
1761 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1762 return forward_call(*args, **kwargs)
1764 result = None
1765 called_always_called_hooks = set()
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/models/whisper/modeling_whisper.py:1694, in WhisperForConditionalGeneration.forward(self, input_features, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, decoder_position_ids, labels, use_cache, output_attentions, output_hidden_states, return_dict, cache_position)
1689 if decoder_input_ids is None and decoder_inputs_embeds is None:
1690 decoder_input_ids = shift_tokens_right(
1691 labels, self.config.pad_token_id, self.config.decoder_start_token_id
1692 )
-> 1694 outputs = self.model(
1695 input_features,
1696 attention_mask=attention_mask,
1697 decoder_input_ids=decoder_input_ids,
1698 encoder_outputs=encoder_outputs,
1699 decoder_attention_mask=decoder_attention_mask,
1700 head_mask=head_mask,
1701 decoder_head_mask=decoder_head_mask,
1702 cross_attn_head_mask=cross_attn_head_mask,
1703 past_key_values=past_key_values,
1704 decoder_inputs_embeds=decoder_inputs_embeds,
1705 decoder_position_ids=decoder_position_ids,
1706 use_cache=use_cache,
1707 output_attentions=output_attentions,
1708 output_hidden_states=output_hidden_states,
1709 return_dict=return_dict,
1710 cache_position=cache_position,
1711 )
1712 lm_logits = self.proj_out(outputs[0])
1714 loss = None
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1751, in Module._wrapped_call_impl(self, *args, **kwargs)
1749 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1750 else:
-> 1751 return self._call_impl(*args, **kwargs)
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1762, in Module._call_impl(self, *args, **kwargs)
1757 # If we don't have any hooks, we want to skip the rest of the logic in
1758 # this function, and just call forward.
1759 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1760 or _global_backward_pre_hooks or _global_backward_hooks
1761 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1762 return forward_call(*args, **kwargs)
1764 result = None
1765 called_always_called_hooks = set()
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/models/whisper/modeling_whisper.py:1529, in WhisperModel.forward(self, input_features, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, decoder_position_ids, use_cache, output_attentions, output_hidden_states, return_dict, cache_position)
1522 encoder_outputs = BaseModelOutput(
1523 last_hidden_state=encoder_outputs[0],
1524 hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
1525 attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
1526 )
1528 # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)
-> 1529 decoder_outputs = self.decoder(
1530 input_ids=decoder_input_ids,
1531 attention_mask=decoder_attention_mask,
1532 encoder_hidden_states=encoder_outputs[0],
1533 head_mask=decoder_head_mask,
1534 cross_attn_head_mask=cross_attn_head_mask,
1535 past_key_values=past_key_values,
1536 inputs_embeds=decoder_inputs_embeds,
1537 position_ids=decoder_position_ids,
1538 use_cache=use_cache,
1539 output_attentions=output_attentions,
1540 output_hidden_states=output_hidden_states,
1541 return_dict=return_dict,
1542 cache_position=cache_position,
1543 )
1545 if not return_dict:
1546 return decoder_outputs + encoder_outputs
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1751, in Module._wrapped_call_impl(self, *args, **kwargs)
1749 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1750 else:
-> 1751 return self._call_impl(*args, **kwargs)
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1762, in Module._call_impl(self, *args, **kwargs)
1757 # If we don't have any hooks, we want to skip the rest of the logic in
1758 # this function, and just call forward.
1759 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1760 or _global_backward_pre_hooks or _global_backward_hooks
1761 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1762 return forward_call(*args, **kwargs)
1764 result = None
1765 called_always_called_hooks = set()
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/models/whisper/modeling_whisper.py:1188, in WhisperDecoder.forward(self, input_ids, attention_mask, encoder_hidden_states, head_mask, cross_attn_head_mask, past_key_values, inputs_embeds, position_ids, use_cache, output_attentions, output_hidden_states, return_dict, cache_position)
1174 layer_outputs = self._gradient_checkpointing_func(
1175 decoder_layer.__call__,
1176 hidden_states,
(...)
1185 cache_position,
1186 )
1187 else:
-> 1188 layer_outputs = decoder_layer(
1189 hidden_states,
1190 attention_mask=causal_mask,
1191 encoder_hidden_states=encoder_hidden_states,
1192 layer_head_mask=(head_mask[idx] if head_mask is not None else None),
1193 cross_attn_layer_head_mask=(
1194 cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None
1195 ),
1196 past_key_value=past_key_values if use_cache else None,
1197 output_attentions=output_attentions,
1198 use_cache=use_cache,
1199 cache_position=cache_position,
1200 )
1201 hidden_states = layer_outputs[0]
1203 if output_attentions:
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1751, in Module._wrapped_call_impl(self, *args, **kwargs)
1749 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1750 else:
-> 1751 return self._call_impl(*args, **kwargs)
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1762, in Module._call_impl(self, *args, **kwargs)
1757 # If we don't have any hooks, we want to skip the rest of the logic in
1758 # this function, and just call forward.
1759 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1760 or _global_backward_pre_hooks or _global_backward_hooks
1761 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1762 return forward_call(*args, **kwargs)
1764 result = None
1765 called_always_called_hooks = set()
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/models/whisper/modeling_whisper.py:727, in WhisperDecoderLayer.forward(self, hidden_states, attention_mask, encoder_hidden_states, encoder_attention_mask, layer_head_mask, cross_attn_layer_head_mask, past_key_value, output_attentions, use_cache, cache_position)
725 residual = hidden_states
726 hidden_states = self.encoder_attn_layer_norm(hidden_states)
--> 727 hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn(
728 hidden_states=hidden_states,
729 key_value_states=encoder_hidden_states,
730 attention_mask=encoder_attention_mask,
731 layer_head_mask=cross_attn_layer_head_mask,
732 past_key_value=past_key_value,
733 output_attentions=output_attentions,
734 )
735 hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
736 hidden_states = residual + hidden_states
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1751, in Module._wrapped_call_impl(self, *args, **kwargs)
1749 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1750 else:
-> 1751 return self._call_impl(*args, **kwargs)
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py:1762, in Module._call_impl(self, *args, **kwargs)
1757 # If we don't have any hooks, we want to skip the rest of the logic in
1758 # this function, and just call forward.
1759 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1760 or _global_backward_pre_hooks or _global_backward_hooks
1761 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1762 return forward_call(*args, **kwargs)
1764 result = None
1765 called_always_called_hooks = set()
File ~/github/tapesearch/.venv/lib/python3.12/site-packages/transformers/models/whisper/modeling_whisper.py:401, in WhisperFlashAttention2.forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions, cache_position)
399 value_states = past_key_value.value_cache[self.layer_idx]
400 else:
--> 401 key_states = self.k_proj(current_states).view(bsz, tgt_len, self.num_heads, self.head_dim)
402 value_states = self.v_proj(current_states).view(bsz, tgt_len, self.num_heads, self.head_dim)
403 key_states = key_states.transpose(1, 2).contiguous()
RuntimeError: shape '[5, 3, 20, 64]' is invalid for input of size 9600000
```
### Expected behavior
I expect the pipeline to run the same way as when flash attention is disabled (but faster!). | {
"login": "Anjum48",
"id": 13783303,
"node_id": "MDQ6VXNlcjEzNzgzMzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/13783303?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Anjum48",
"html_url": "https://github.com/Anjum48",
"followers_url": "https://api.github.com/users/Anjum48/followers",
"following_url": "https://api.github.com/users/Anjum48/following{/other_user}",
"gists_url": "https://api.github.com/users/Anjum48/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Anjum48/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Anjum48/subscriptions",
"organizations_url": "https://api.github.com/users/Anjum48/orgs",
"repos_url": "https://api.github.com/users/Anjum48/repos",
"events_url": "https://api.github.com/users/Anjum48/events{/privacy}",
"received_events_url": "https://api.github.com/users/Anjum48/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38662/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38662/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38661 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38661/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38661/comments | https://api.github.com/repos/huggingface/transformers/issues/38661/events | https://github.com/huggingface/transformers/pull/38661 | 3,127,358,412 | PR_kwDOCUB6oc6Zh4vF | 38,661 | fix(auto): Route kwargs correctly for composite models | {
"login": "gspeter-max",
"id": 193389584,
"node_id": "U_kgDOC4bkEA",
"avatar_url": "https://avatars.githubusercontent.com/u/193389584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gspeter-max",
"html_url": "https://github.com/gspeter-max",
"followers_url": "https://api.github.com/users/gspeter-max/followers",
"following_url": "https://api.github.com/users/gspeter-max/following{/other_user}",
"gists_url": "https://api.github.com/users/gspeter-max/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gspeter-max/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gspeter-max/subscriptions",
"organizations_url": "https://api.github.com/users/gspeter-max/orgs",
"repos_url": "https://api.github.com/users/gspeter-max/repos",
"events_url": "https://api.github.com/users/gspeter-max/events{/privacy}",
"received_events_url": "https://api.github.com/users/gspeter-max/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-07T18:32:25 | 2025-06-08T10:23:52 | 2025-06-08T10:23:52 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38661",
"html_url": "https://github.com/huggingface/transformers/pull/38661",
"diff_url": "https://github.com/huggingface/transformers/pull/38661.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38661.patch",
"merged_at": null
} | <!-- This closes the linked issue when the PR is merged. -->
Fixes #38258
## Description
This PR resolves an issue where keyword arguments passed to `from_pretrained` or `from_config` for composite models were not being correctly routed to the respective sub-configs. This would lead to a `TypeError` when an argument intended for a sub-model (e.g., `use_cache=True` for a text model) was passed down to a child constructor that did not accept it.
## Solution
The solution introduces a new private static method, `_route_kwargs`, to the `_BaseAutoModelClass` in `auto/factory.py`. This centralized helper method is responsible for:
1. Iterating through the provided `kwargs`.
2. Checking if a given keyword argument is a valid attribute of any of the model's sub-configs (e.g., `text_config`, `vision_config`).
3. If a match is found, the attribute is correctly set on the corresponding sub-config object (`config.text_config.use_cache = True`).
4. The keyword argument is then removed from the main `kwargs` dictionary to prevent it from being passed down incorrectly.
This helper method is now called from the entry points of both the `from_pretrained` and `from_config` methods. This ensures that the argument routing is applied robustly and consistently, regardless of how a user chooses to load a composite model.
This approach fixes the underlying issue in the factory layer, providing a general solution for all current and future composite models.
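The routing logic described above can be sketched in isolation. This is a minimal, self-contained illustration with hypothetical class and function names — the actual helper is a private static method on `_BaseAutoModelClass` in `auto/factory.py`:

```python
# Minimal sketch of the kwarg-routing idea (hypothetical names; not the
# actual transformers implementation).
class SubConfig:
    """Stands in for a sub-config such as text_config or vision_config."""
    def __init__(self, **attrs):
        for key, value in attrs.items():
            setattr(self, key, value)

def route_kwargs(config, kwargs, sub_config_names=("text_config", "vision_config")):
    """Move each kwarg that matches an attribute of a sub-config onto that
    sub-config, and remove it from the main kwargs dict."""
    for name in sub_config_names:
        sub_config = getattr(config, name, None)
        if sub_config is None:
            continue
        for key in list(kwargs):
            if hasattr(sub_config, key):
                setattr(sub_config, key, kwargs.pop(key))
    return kwargs

class CompositeConfig:
    def __init__(self):
        self.text_config = SubConfig(use_cache=False)
        self.vision_config = SubConfig(image_size=224)

config = CompositeConfig()
remaining = route_kwargs(config, {"use_cache": True, "unknown_flag": 1})
# use_cache lands on config.text_config; unrouted kwargs remain for the caller.
```

The key point is step 4 of the description: routed kwargs are popped from the dict, so they can no longer leak into a child constructor that does not accept them.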
## Testing
I have confirmed that these changes fix the issue by running the relevant tests in `tests/models/auto/test_modeling_auto_composite.py`. Additionally, all quality checks (`make quality`) pass successfully. | {
"login": "gspeter-max",
"id": 193389584,
"node_id": "U_kgDOC4bkEA",
"avatar_url": "https://avatars.githubusercontent.com/u/193389584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gspeter-max",
"html_url": "https://github.com/gspeter-max",
"followers_url": "https://api.github.com/users/gspeter-max/followers",
"following_url": "https://api.github.com/users/gspeter-max/following{/other_user}",
"gists_url": "https://api.github.com/users/gspeter-max/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gspeter-max/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gspeter-max/subscriptions",
"organizations_url": "https://api.github.com/users/gspeter-max/orgs",
"repos_url": "https://api.github.com/users/gspeter-max/repos",
"events_url": "https://api.github.com/users/gspeter-max/events{/privacy}",
"received_events_url": "https://api.github.com/users/gspeter-max/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38661/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38660/comments | https://api.github.com/repos/huggingface/transformers/issues/38660/events | https://github.com/huggingface/transformers/pull/38660 | 3,127,267,861 | PR_kwDOCUB6oc6ZhlGu | 38,660 | Raise `TypeError` instead of ValueError for invalid types | {
"login": "Sai-Suraj-27",
"id": 87087741,
"node_id": "MDQ6VXNlcjg3MDg3NzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/87087741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sai-Suraj-27",
"html_url": "https://github.com/Sai-Suraj-27",
"followers_url": "https://api.github.com/users/Sai-Suraj-27/followers",
"following_url": "https://api.github.com/users/Sai-Suraj-27/following{/other_user}",
"gists_url": "https://api.github.com/users/Sai-Suraj-27/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sai-Suraj-27/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sai-Suraj-27/subscriptions",
"organizations_url": "https://api.github.com/users/Sai-Suraj-27/orgs",
"repos_url": "https://api.github.com/users/Sai-Suraj-27/repos",
"events_url": "https://api.github.com/users/Sai-Suraj-27/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sai-Suraj-27/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-07T17:04:37 | 2025-07-21T12:42:25 | 2025-07-21T12:42:00 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38660",
"html_url": "https://github.com/huggingface/transformers/pull/38660",
"diff_url": "https://github.com/huggingface/transformers/pull/38660.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38660.patch",
"merged_at": "2025-07-21T12:42:00"
} | # What does this PR do?
Raises `TypeError` instead of `ValueError` when an argument of an invalid type is encountered.
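For illustration, the convention this PR applies — a wrong *type* raises `TypeError`, while a wrong *value* of the right type raises `ValueError` — looks like this (hypothetical function, not code from the PR):

```python
def set_temperature(value):
    # Wrong type -> TypeError; wrong value of the right type -> ValueError.
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise TypeError(f"temperature must be a number, got {type(value).__name__}")
    if value < 0:
        raise ValueError(f"temperature must be non-negative, got {value}")
    return float(value)
```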
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh @Rocketknight1 | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38660/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38659/comments | https://api.github.com/repos/huggingface/transformers/issues/38659/events | https://github.com/huggingface/transformers/issues/38659 | 3,127,199,318 | I_kwDOCUB6oc66ZUZW | 38,659 | NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. | {
"login": "lucavlasblom",
"id": 125964310,
"node_id": "U_kgDOB4IQFg",
"avatar_url": "https://avatars.githubusercontent.com/u/125964310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucavlasblom",
"html_url": "https://github.com/lucavlasblom",
"followers_url": "https://api.github.com/users/lucavlasblom/followers",
"following_url": "https://api.github.com/users/lucavlasblom/following{/other_user}",
"gists_url": "https://api.github.com/users/lucavlasblom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucavlasblom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucavlasblom/subscriptions",
"organizations_url": "https://api.github.com/users/lucavlasblom/orgs",
"repos_url": "https://api.github.com/users/lucavlasblom/repos",
"events_url": "https://api.github.com/users/lucavlasblom/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucavlasblom/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-07T15:52:53 | 2025-06-09T13:26:26 | 2025-06-09T13:26:24 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.52.4
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python version: 3.11.13
- Huggingface_hub version: 0.32.4
- Safetensors version: 0.5.3
- Accelerate version: 1.7.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cu124 (False)
- Tensorflow version (GPU?): 2.18.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.10.6 (cpu)
- Jax version: 0.5.2
- JaxLib version: 0.5.1
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
from datasets import DatasetDict, load_dataset
data_files = {
    "train": "train.txt",
    "validation": "dev.txt",
    "test": "test.txt"
}
dataset = load_dataset("text", data_files=data_files, split={
    "train": "train",
    "validation": "validation",
    "test": "test"
})
NotImplementedError Traceback (most recent call last)
[<ipython-input-51-784f867eba96>](https://localhost:8080/#) in <cell line: 0>()
5 }
6
----> 7 dataset = load_dataset("conll2003", data_files=data_files, split={
8 "train": "train",
9 "validation": "validation",
1 frames
[/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
1171 is_local = not is_remote_filesystem(self._fs)
1172 if not is_local:
-> 1173 raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
1174 if not os.path.exists(self._output_dir):
1175 raise FileNotFoundError(
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
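This error is commonly reported when an older `datasets` release meets a newer `fsspec` (an assumption — check the pinned versions in the environment; upgrading both usually resolves it). If upgrading is not an option, a stdlib fallback that mimics the split layout of the `text` loader can be sketched as:

```python
def load_text_splits(data_files):
    """Read each split's .txt file into a list of {"text": line} records,
    mirroring the shape produced by load_dataset("text", ...)."""
    splits = {}
    for split_name, path in data_files.items():
        with open(path, encoding="utf-8") as f:
            splits[split_name] = [{"text": line.rstrip("\n")} for line in f]
    return splits
```

This returns plain Python lists rather than `Dataset` objects, so it is only a stopgap for inspection or simple preprocessing.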
### Expected behavior
I expect load_dataset to load my train, validation, and test sets that are stored locally in Google Colab. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38659/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38658/comments | https://api.github.com/repos/huggingface/transformers/issues/38658/events | https://github.com/huggingface/transformers/pull/38658 | 3,127,068,744 | PR_kwDOCUB6oc6Zg5ir | 38,658 | Fix `qwen_2_5 omni` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-07T14:20:27 | 2025-06-12T12:43:57 | 2025-06-12T12:43:55 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38658",
"html_url": "https://github.com/huggingface/transformers/pull/38658",
"diff_url": "https://github.com/huggingface/transformers/pull/38658.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38658.patch",
"merged_at": "2025-06-12T12:43:55"
} | # What does this PR do?
These tests have never passed since the model was added.
They now pass on both A10 and T4. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38658/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38657/comments | https://api.github.com/repos/huggingface/transformers/issues/38657/events | https://github.com/huggingface/transformers/issues/38657 | 3,127,054,552 | I_kwDOCUB6oc66YxDY | 38,657 | please develop transformers java/scala sdk,eagerly to use! | {
"login": "mullerhai",
"id": 6143404,
"node_id": "MDQ6VXNlcjYxNDM0MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6143404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mullerhai",
"html_url": "https://github.com/mullerhai",
"followers_url": "https://api.github.com/users/mullerhai/followers",
"following_url": "https://api.github.com/users/mullerhai/following{/other_user}",
"gists_url": "https://api.github.com/users/mullerhai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mullerhai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mullerhai/subscriptions",
"organizations_url": "https://api.github.com/users/mullerhai/orgs",
"repos_url": "https://api.github.com/users/mullerhai/repos",
"events_url": "https://api.github.com/users/mullerhai/events{/privacy}",
"received_events_url": "https://api.github.com/users/mullerhai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-06-07T14:07:24 | 2025-06-10T02:56:28 | null | NONE | null | null | null | null | ### Feature request
Transformers has become a de facto essential deep learning tool. However, you only provide APIs for three languages: Python, JavaScript, and Rust. Have you forgotten about Java, Scala, and Kotlin? Is it because your team lacks manpower or isn't proficient in the JVM? There are already PyTorch bindings for Java and Scala, such as javacpp-pytorch and storch. Everyone wants to use Transformers, yet this obstacle has gone unaddressed for years. Please tell your technical director or CTO that we need Java and Scala libraries for Transformers that can directly download models, datasets, and tokenizers and perform model training. Please make sure to solve this issue this year. Thank you
### Motivation
dd
### Your contribution
dd | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38657/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38657/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/38656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38656/comments | https://api.github.com/repos/huggingface/transformers/issues/38656/events | https://github.com/huggingface/transformers/issues/38656 | 3,126,724,648 | I_kwDOCUB6oc66Xggo | 38,656 | Potential Memory Leak or Caching in Fast Image Processor | {
"login": "yhyang201",
"id": 47235274,
"node_id": "MDQ6VXNlcjQ3MjM1Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/47235274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yhyang201",
"html_url": "https://github.com/yhyang201",
"followers_url": "https://api.github.com/users/yhyang201/followers",
"following_url": "https://api.github.com/users/yhyang201/following{/other_user}",
"gists_url": "https://api.github.com/users/yhyang201/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yhyang201/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yhyang201/subscriptions",
"organizations_url": "https://api.github.com/users/yhyang201/orgs",
"repos_url": "https://api.github.com/users/yhyang201/repos",
"events_url": "https://api.github.com/users/yhyang201/events{/privacy}",
"received_events_url": "https://api.github.com/users/yhyang201/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-07T08:46:48 | 2025-08-12T13:02:37 | 2025-08-12T13:02:36 | NONE | null | null | null | null | ### System Info
Hi team,
Thank you for your great work on `transformers`!
While using the `AutoProcessor` with `use_fast=True`, I noticed that there seems to be a memory leak or possibly some form of persistent caching when processing images. Even after deleting the processor and clearing the CUDA cache, approximately 600MB of GPU memory remains occupied.
Here is a minimal reproducible example:
```python
from transformers import AutoProcessor
from PIL import Image
import time
import torch
import requests
from io import BytesIO
processor = AutoProcessor.from_pretrained(
"Qwen/Qwen2.5-VL-7B-Instruct",
use_fast=True,
trust_remote_code=False,
revision=None,
)
url = "https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true"
response = requests.get(url)
images = [Image.open(BytesIO(response.content)).convert("RGB")]
result = processor(
text=[
"<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
"<|im_start|>user\nWhat’s in this image?<|vision_start|><|image_pad|><|vision_end|><|im_end|>\n"
"<|im_start|>assistant\n"
],
padding=True,
return_tensors="pt",
images=images,
device="cuda"
)
del result
del processor
torch.cuda.empty_cache()
print("You can now use nvidia-smi to observe GPU memory usage, which is around 600MB.")
while True:
time.sleep(60)
```
I’d like to kindly ask:
1. If this is due to caching, is there a way to control or disable the cache?
2. If this is an unintended memory leak, would it be possible to investigate and potentially fix it?
Thanks again for your help and time!
Best regards
### Who can help?
tokenizers: @ArthurZucker and @itazap
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
As provided above.
### Expected behavior
It would be great if caching could be made optional, or if there could be an option to avoid any GPU memory usage entirely. | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38656/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38656/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38655/comments | https://api.github.com/repos/huggingface/transformers/issues/38655/events | https://github.com/huggingface/transformers/pull/38655 | 3,126,707,151 | PR_kwDOCUB6oc6Zfr9Y | 38,655 | minor docstring fixups | {
"login": "davidjsonn",
"id": 155117116,
"node_id": "U_kgDOCT7mPA",
"avatar_url": "https://avatars.githubusercontent.com/u/155117116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidjsonn",
"html_url": "https://github.com/davidjsonn",
"followers_url": "https://api.github.com/users/davidjsonn/followers",
"following_url": "https://api.github.com/users/davidjsonn/following{/other_user}",
"gists_url": "https://api.github.com/users/davidjsonn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidjsonn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidjsonn/subscriptions",
"organizations_url": "https://api.github.com/users/davidjsonn/orgs",
"repos_url": "https://api.github.com/users/davidjsonn/repos",
"events_url": "https://api.github.com/users/davidjsonn/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidjsonn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-07T08:25:43 | 2025-06-18T09:13:48 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38655",
"html_url": "https://github.com/huggingface/transformers/pull/38655",
"diff_url": "https://github.com/huggingface/transformers/pull/38655.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38655.patch",
"merged_at": null
} | Hey! Fixed errors:
src/transformers/masking_utils.py
`necesary` - `necessary` x2
src/transformers/model_debugging_utils.py
`spearate` - `separate`
src/transformers/video_processing_utils.py
`successfull` - `successful` | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38655/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/38654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38654/comments | https://api.github.com/repos/huggingface/transformers/issues/38654/events | https://github.com/huggingface/transformers/issues/38654 | 3,126,700,141 | I_kwDOCUB6oc66Xaht | 38,654 | The visualization of image input in Qwen2.5-VL | {
"login": "Bytes-Lin",
"id": 73384757,
"node_id": "MDQ6VXNlcjczMzg0NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/73384757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bytes-Lin",
"html_url": "https://github.com/Bytes-Lin",
"followers_url": "https://api.github.com/users/Bytes-Lin/followers",
"following_url": "https://api.github.com/users/Bytes-Lin/following{/other_user}",
"gists_url": "https://api.github.com/users/Bytes-Lin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bytes-Lin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bytes-Lin/subscriptions",
"organizations_url": "https://api.github.com/users/Bytes-Lin/orgs",
"repos_url": "https://api.github.com/users/Bytes-Lin/repos",
"events_url": "https://api.github.com/users/Bytes-Lin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bytes-Lin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-07T08:15:44 | 2025-06-10T09:04:04 | 2025-06-10T09:04:04 | NONE | null | null | null | null | The image input of Qwen2.5-VL is processed by the processor and then saved as a tensor in inputs['pixel_values'].
I tried to restore the image using the tensor in inputs['pixel_values'], but I found that the restored image patches were out of order.
So how can the image be properly restored from inputs['pixel_values']?
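As a toy illustration of why storage order matters (this is not the actual Qwen2.5-VL layout, which also involves temporal patches, resizing, and normalization — treat the 2×2 grouped order below as an assumption for demonstration), patches stored in merged groups come out scrambled under naive row-major reassembly:

```python
def patchify_grouped(img, merge=2):
    """Flatten a 2D grid into 1x1 'patches' stored in merge x merge groups,
    mimicking a merged-patch storage order (an assumption for illustration)."""
    h, w = len(img), len(img[0])
    patches = []
    for gy in range(0, h, merge):
        for gx in range(0, w, merge):
            for dy in range(merge):
                for dx in range(merge):
                    patches.append(img[gy + dy][gx + dx])
    return patches

def reassemble(patches, h, w, merge=2):
    """Invert the grouped order back into a 2D grid."""
    out = [[None] * w for _ in range(h)]
    it = iter(patches)
    for gy in range(0, h, merge):
        for gx in range(0, w, merge):
            for dy in range(merge):
                for dx in range(merge):
                    out[gy + dy][gx + dx] = next(it)
    return out

img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
flat = patchify_grouped(img)
naive = [flat[r * 4:(r + 1) * 4] for r in range(4)]  # row-major guess: scrambled
print(naive[0])                    # not the original first row
print(reassemble(flat, 4, 4)[0])   # matches the original first row
```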
For example, the original input image is as follows.

And the image restored from inputs['pixel_values'] came out scrambled.
 | {
"login": "Bytes-Lin",
"id": 73384757,
"node_id": "MDQ6VXNlcjczMzg0NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/73384757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bytes-Lin",
"html_url": "https://github.com/Bytes-Lin",
"followers_url": "https://api.github.com/users/Bytes-Lin/followers",
"following_url": "https://api.github.com/users/Bytes-Lin/following{/other_user}",
"gists_url": "https://api.github.com/users/Bytes-Lin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bytes-Lin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bytes-Lin/subscriptions",
"organizations_url": "https://api.github.com/users/Bytes-Lin/orgs",
"repos_url": "https://api.github.com/users/Bytes-Lin/repos",
"events_url": "https://api.github.com/users/Bytes-Lin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bytes-Lin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38654/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38653/comments | https://api.github.com/repos/huggingface/transformers/issues/38653/events | https://github.com/huggingface/transformers/pull/38653 | 3,126,423,043 | PR_kwDOCUB6oc6ZevU3 | 38,653 | Add sampling support to group beam search | {
"login": "gspeter-max",
"id": 193389584,
"node_id": "U_kgDOC4bkEA",
"avatar_url": "https://avatars.githubusercontent.com/u/193389584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gspeter-max",
"html_url": "https://github.com/gspeter-max",
"followers_url": "https://api.github.com/users/gspeter-max/followers",
"following_url": "https://api.github.com/users/gspeter-max/following{/other_user}",
"gists_url": "https://api.github.com/users/gspeter-max/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gspeter-max/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gspeter-max/subscriptions",
"organizations_url": "https://api.github.com/users/gspeter-max/orgs",
"repos_url": "https://api.github.com/users/gspeter-max/repos",
"events_url": "https://api.github.com/users/gspeter-max/events{/privacy}",
"received_events_url": "https://api.github.com/users/gspeter-max/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-07T04:17:07 | 2025-06-09T13:16:15 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38653",
"html_url": "https://github.com/huggingface/transformers/pull/38653",
"diff_url": "https://github.com/huggingface/transformers/pull/38653.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38653.patch",
"merged_at": null
} | Fixes #38268
### Feature Description
This PR implements the feature request to add sampling capabilities (e.g., Top-K, Top-P, temperature) to Group Beam Search, which was previously a purely greedy algorithm.
### Problem
Currently, `_group_beam_search` in the `GenerationMixin` is implemented as a deterministic process. After applying the diversity penalty to the logits, it always selects the highest-probability tokens using `torch.topk`. This prevents users from leveraging the creative and diverse outputs that stochastic sampling methods provide, which is especially useful for tasks like biological sequence or code generation.
### Solution
This implementation modifies `_group_beam_search` by adding a conditional path that is triggered when `generation_config.do_sample=True`. The new sampling path includes the following logic:
1. **Applies All Processors & Warpers:** It correctly applies all `LogitsProcessor`s (including the `ForcedDiversityLogitsProcessor`) and then applies the `LogitsWarper`s (for Temperature, Top-K, Top-P) to the scores.
2. **Safe Candidate Selection:** It safely calculates the number of candidates to sample by taking the `min()` of what the `beam_scorer` requires and the number of tokens available after warping, preventing potential `torch.multinomial` errors.
3. **Stochastic Sampling:** It uses `torch.multinomial` to stochastically sample candidate tokens from the resulting probability distribution.
4. **Score Gathering:** It gathers the log-scores of the sampled tokens to ensure compatibility with the rest of the beam search algorithm.
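The selection logic in steps 2–3 above can be sketched language-agnostically. The toy snippet below (stdlib `random` standing in for `torch.multinomial`; not the PR's actual code) assumes the scores have already been warped so that filtered tokens carry zero probability:

```python
import random

def sample_candidates(probs, num_needed, seed=0):
    """Toy stand-in for the stochastic selection step: draw candidate
    token indices without replacement from a warped distribution."""
    rng = random.Random(seed)
    # Safe candidate count: never ask for more tokens than remain after
    # warping (mirrors the min() guard described in step 2).
    available = sum(1 for p in probs if p > 0)
    k = min(num_needed, available)
    indices = [i for i, p in enumerate(probs) if p > 0]
    weights = [probs[i] for i in indices]
    chosen = []
    for _ in range(k):
        pick = rng.choices(indices, weights=weights, k=1)[0]
        pos = indices.index(pick)
        indices.pop(pos)
        weights.pop(pos)
        chosen.append(pick)
    return chosen

# After top-k warping, only 3 of 5 tokens keep nonzero mass,
# so at most 3 candidates can be drawn even if 4 are requested.
print(sample_candidates([0.5, 0.3, 0.2, 0.0, 0.0], num_needed=4))
```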
Additionally, the validation check in `generation/configuration_utils.py` that previously raised a `ValueError` for `do_sample=True` with group beam search has been removed to enable this new feature.
### Testing
The feature has been tested locally by running `model.generate` with `do_sample=True` and various sampling parameters (`temperature`, `top_k`, `top_p`). The tests confirm that:
1. The code runs without errors.
2. The generated output is stochastic and differs from the deterministic greedy output.
3. The generated output changes on subsequent runs, confirming that sampling is active.
--- Generating with Greedy Group Beam Search ---
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Greedy Outputs:
0: The best way to learn about large language models is to learn about the language.
1: The best way to learn about large language models is to learn about the language.
2: The best way to learn about large language models is to look at a few examples of how to use them.
The first step is to look at a few examples of how to use them. The second step is to look at a few examples of how to use them. The third step
3: The best way to learn about large language models is to look at a few examples of how to use them.
The first step is to look at a few examples of how to use them. The first step is to look at a few examples of how to use them. The second step
--- Generating with Sampling Group Beam Search ---
Sampling Outputs:
0: The best way to learn about large language models is to look at a few simple examples of how a language can be used to understand languages. For example, a simple example can look at a simple example of how a language can be used to understand languages. For example, a simple example can look at
1: The best way to learn about large language models is to find new ways to work around them.
When you start making your own languages, you should be careful not to think about how you are doing this.
You should not be just focusing on the tools that you
2: The best way to learn about large language models is to look at a few simple examples of how a language can be used to understand languages. For example, a simple example can look at a simple example of how an interpreter can be used to understand languages.
3: The best way to learn about large language models is to find new ways to work around them.
When you start making your own languages, you should be careful not to think about how you are doing this.
You should not be just focusing on the tools that are
--- Generating with MORE RANDOM Sampling Group Beam Search ---
More Random Sampling Outputs:
0: The best way to learn about large language models is through the research paper published in Psychological Science .
I was so lucky (because everyone else I was lucky to be with has been with) that it was very helpful to do some small tasks so I couldn't stop feeling inspired by
1: The best way to learn about large language models is through the research paper published in Psychological Science .
I was so lucky (because everyone else I was lucky to be with has been with) that it was very helpful to do some small tasks so I couldn't stop feeling very good
2: The best way to learn about large language models is to take a look at a few simple language modeling tutorials you can be sure you’ll learn a lot about the languages and frameworks that you’ll be used to writing them. These tutorials can be found in The Cucumber, Python
3: The best way to learn about large language models is to take a look at a few simple language modeling tutorials you can be sure you’ll learn a lot about the languages and frameworks that you’ll be used to writing them. These tutorials can be found in The A Language Modeler blog | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38653/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/38652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38652/comments | https://api.github.com/repos/huggingface/transformers/issues/38652/events | https://github.com/huggingface/transformers/pull/38652 | 3,126,139,509 | PR_kwDOCUB6oc6Zdwtt | 38,652 | Fix typo in Language Modeling example scripts and update TPU type | {
"login": "framoncg",
"id": 92894661,
"node_id": "U_kgDOBYl1xQ",
"avatar_url": "https://avatars.githubusercontent.com/u/92894661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/framoncg",
"html_url": "https://github.com/framoncg",
"followers_url": "https://api.github.com/users/framoncg/followers",
"following_url": "https://api.github.com/users/framoncg/following{/other_user}",
"gists_url": "https://api.github.com/users/framoncg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/framoncg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/framoncg/subscriptions",
"organizations_url": "https://api.github.com/users/framoncg/orgs",
"repos_url": "https://api.github.com/users/framoncg/repos",
"events_url": "https://api.github.com/users/framoncg/events{/privacy}",
"received_events_url": "https://api.github.com/users/framoncg/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-07T00:06:29 | 2025-06-10T13:44:07 | 2025-06-10T13:43:36 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38652",
"html_url": "https://github.com/huggingface/transformers/pull/38652",
"diff_url": "https://github.com/huggingface/transformers/pull/38652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38652.patch",
"merged_at": "2025-06-10T13:43:35"
} | # What does this PR do?
It fixes a couple of typos under the Language Modeling PyTorch examples directory that prevented the examples from running successfully. It also changes the DistributedType in the accelerator module from TPU to TP, per the accelerate [repository](https://github.com/huggingface/accelerate/blob/682691deaca2637e0a2efeaa5ebb6dd8bade8c30/src/accelerate/utils/dataclasses.py#L585C4-L585C13). These changes allow the examples to be run as suggested for testing in the README file.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38652/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38651/comments | https://api.github.com/repos/huggingface/transformers/issues/38651/events | https://github.com/huggingface/transformers/pull/38651 | 3,125,799,605 | PR_kwDOCUB6oc6ZclK6 | 38,651 | Docs: update bitsandbytes torch.compile compatibility | {
"login": "matthewdouglas",
"id": 38992547,
"node_id": "MDQ6VXNlcjM4OTkyNTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/38992547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matthewdouglas",
"html_url": "https://github.com/matthewdouglas",
"followers_url": "https://api.github.com/users/matthewdouglas/followers",
"following_url": "https://api.github.com/users/matthewdouglas/following{/other_user}",
"gists_url": "https://api.github.com/users/matthewdouglas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matthewdouglas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matthewdouglas/subscriptions",
"organizations_url": "https://api.github.com/users/matthewdouglas/orgs",
"repos_url": "https://api.github.com/users/matthewdouglas/repos",
"events_url": "https://api.github.com/users/matthewdouglas/events{/privacy}",
"received_events_url": "https://api.github.com/users/matthewdouglas/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T20:29:19 | 2025-06-09T18:51:59 | 2025-06-09T18:51:57 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38651",
"html_url": "https://github.com/huggingface/transformers/pull/38651",
"diff_url": "https://github.com/huggingface/transformers/pull/38651.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38651.patch",
"merged_at": "2025-06-09T18:51:57"
} | # What does this PR do?
Updates the quantization overview documentation to indicate that bitsandbytes is now compatible with `torch.compile`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@SunMarc @stevhliu | {
"login": "matthewdouglas",
"id": 38992547,
"node_id": "MDQ6VXNlcjM4OTkyNTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/38992547?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matthewdouglas",
"html_url": "https://github.com/matthewdouglas",
"followers_url": "https://api.github.com/users/matthewdouglas/followers",
"following_url": "https://api.github.com/users/matthewdouglas/following{/other_user}",
"gists_url": "https://api.github.com/users/matthewdouglas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matthewdouglas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matthewdouglas/subscriptions",
"organizations_url": "https://api.github.com/users/matthewdouglas/orgs",
"repos_url": "https://api.github.com/users/matthewdouglas/repos",
"events_url": "https://api.github.com/users/matthewdouglas/events{/privacy}",
"received_events_url": "https://api.github.com/users/matthewdouglas/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38651/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38650/comments | https://api.github.com/repos/huggingface/transformers/issues/38650/events | https://github.com/huggingface/transformers/issues/38650 | 3,125,768,141 | I_kwDOCUB6oc66T2_N | 38,650 | Support of Qwen3 GGUF model | {
"login": "Auth0rM0rgan",
"id": 22752107,
"node_id": "MDQ6VXNlcjIyNzUyMTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/22752107?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Auth0rM0rgan",
"html_url": "https://github.com/Auth0rM0rgan",
"followers_url": "https://api.github.com/users/Auth0rM0rgan/followers",
"following_url": "https://api.github.com/users/Auth0rM0rgan/following{/other_user}",
"gists_url": "https://api.github.com/users/Auth0rM0rgan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Auth0rM0rgan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Auth0rM0rgan/subscriptions",
"organizations_url": "https://api.github.com/users/Auth0rM0rgan/orgs",
"repos_url": "https://api.github.com/users/Auth0rM0rgan/repos",
"events_url": "https://api.github.com/users/Auth0rM0rgan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Auth0rM0rgan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T20:11:23 | 2025-07-15T08:02:59 | 2025-07-15T08:02:59 | NONE | null | null | null | null | Hi, I am getting the following error when I want to use the GGUF model with Qwen3
"ValueError: GGUF model with architecture qwen3 is not supported yet."
I have the latest transformers and gguf-0.17.0
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# `model_name` should point at the Hub repo that hosts the GGUF file
self.tokenizer = AutoTokenizer.from_pretrained(model_name, gguf_file="Qwen3-0.6B-Q2_K_L.gguf", use_fast=True)
if self.tokenizer.pad_token is None:
    self.tokenizer.pad_token = "<pad>"
    self.tokenizer.add_special_tokens({"pad_token": "<pad>"})
self.tokenizer.padding_side = "left"
self.model = AutoModelForCausalLM.from_pretrained(
    model_name,
    gguf_file="Qwen3-0.6B-Q2_K_L.gguf",
    pad_token_id=self.tokenizer.pad_token_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```
How can I use a Qwen3 GGUF model with transformers? Could you please add support for it?
Thanks! | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38650/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38650/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38649/comments | https://api.github.com/repos/huggingface/transformers/issues/38649/events | https://github.com/huggingface/transformers/pull/38649 | 3,125,675,086 | PR_kwDOCUB6oc6ZcJkU | 38,649 | Add Qwen2 MoE model card | {
"login": "rileyafox",
"id": 41808064,
"node_id": "MDQ6VXNlcjQxODA4MDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/41808064?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rileyafox",
"html_url": "https://github.com/rileyafox",
"followers_url": "https://api.github.com/users/rileyafox/followers",
"following_url": "https://api.github.com/users/rileyafox/following{/other_user}",
"gists_url": "https://api.github.com/users/rileyafox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rileyafox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rileyafox/subscriptions",
"organizations_url": "https://api.github.com/users/rileyafox/orgs",
"repos_url": "https://api.github.com/users/rileyafox/repos",
"events_url": "https://api.github.com/users/rileyafox/events{/privacy}",
"received_events_url": "https://api.github.com/users/rileyafox/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T19:24:53 | 2025-06-11T22:14:02 | 2025-06-11T22:14:01 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38649",
"html_url": "https://github.com/huggingface/transformers/pull/38649",
"diff_url": "https://github.com/huggingface/transformers/pull/38649.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38649.patch",
"merged_at": "2025-06-11T22:14:01"
} | # What does this PR do?
Refactored the Qwen2MoE model card to match the template
## Before submitting
- This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@stevhliu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38649/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38648/comments | https://api.github.com/repos/huggingface/transformers/issues/38648/events | https://github.com/huggingface/transformers/pull/38648 | 3,125,497,696 | PR_kwDOCUB6oc6ZbiOs | 38,648 | Add sampling support to group beam search | {
"login": "gspeter-max",
"id": 193389584,
"node_id": "U_kgDOC4bkEA",
"avatar_url": "https://avatars.githubusercontent.com/u/193389584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gspeter-max",
"html_url": "https://github.com/gspeter-max",
"followers_url": "https://api.github.com/users/gspeter-max/followers",
"following_url": "https://api.github.com/users/gspeter-max/following{/other_user}",
"gists_url": "https://api.github.com/users/gspeter-max/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gspeter-max/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gspeter-max/subscriptions",
"organizations_url": "https://api.github.com/users/gspeter-max/orgs",
"repos_url": "https://api.github.com/users/gspeter-max/repos",
"events_url": "https://api.github.com/users/gspeter-max/events{/privacy}",
"received_events_url": "https://api.github.com/users/gspeter-max/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T18:06:20 | 2025-06-07T04:18:07 | 2025-06-07T04:18:07 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38648",
"html_url": "https://github.com/huggingface/transformers/pull/38648",
"diff_url": "https://github.com/huggingface/transformers/pull/38648.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38648.patch",
"merged_at": null
} | Fixes #38268
### Feature Description
This PR implements the feature request to add sampling capabilities (e.g., Top-K, Top-P, temperature) to Group Beam Search, which was previously a purely greedy algorithm.
### Problem
Currently, `_group_beam_search` in the `GenerationMixin` is implemented as a deterministic process. After applying the diversity penalty to the logits, it always selects the highest-probability tokens using `torch.topk`. This prevents users from leveraging the creative and diverse outputs that stochastic sampling methods provide, which is especially useful for tasks like biological sequence or code generation.
### Solution
This implementation modifies `_group_beam_search` by adding a conditional path that is triggered when `generation_config.do_sample=True`. The new sampling path includes the following logic:
1. **Applies All Processors & Warpers:** It correctly applies all `LogitsProcessor`s (including the `ForcedDiversityLogitsProcessor`) and then applies the `LogitsWarper`s (for Temperature, Top-K, Top-P) to the scores.
2. **Safe Candidate Selection:** It safely calculates the number of candidates to sample by taking the `min()` of what the `beam_scorer` requires and the number of tokens available after warping, preventing potential `torch.multinomial` errors.
3. **Stochastic Sampling:** It uses `torch.multinomial` to stochastically sample candidate tokens from the resulting probability distribution.
4. **Score Gathering:** It gathers the log-scores of the sampled tokens to ensure compatibility with the rest of the beam search algorithm.
Additionally, the validation check in `generation/configuration_utils.py` that previously raised a `ValueError` for `do_sample=True` with group beam search has been removed to enable this new feature.
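The warp-then-sample path described above can be illustrated with a minimal, dependency-free sketch. This is not the actual `GenerationMixin` code: the function name and plain-Python softmax/top-k/top-p logic here are illustrative stand-ins for the `LogitsWarper` chain and `torch.multinomial`, and `random.choices` samples with replacement (a simplification of `torch.multinomial`'s behavior):

```python
import math
import random

def warp_and_sample(logits, temperature=1.0, top_k=0, top_p=1.0, num_samples=2, rng=None):
    """Apply temperature/top-k/top-p warping to a list of logits, then
    stochastically sample `num_samples` candidate token ids."""
    rng = rng or random.Random()
    # Temperature scaling, then a numerically stable softmax
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exp = [math.exp(l - m) for l in scaled]
    total = sum(exp)
    probs = [e / total for e in exp]
    # Top-k: keep only the k most probable tokens (0 means "keep all")
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    if top_k > 0:
        order = order[:top_k]
    # Top-p: keep the smallest high-probability prefix whose mass reaches top_p
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Never request more candidates than survive the warp (point 2 above),
    # then sample from the renormalized distribution
    num_samples = min(num_samples, len(kept))
    weights = [probs[i] for i in kept]
    return rng.choices(kept, weights=weights, k=num_samples)
```

With `top_k=1` (or a very small `top_p`) only the argmax token survives, which degenerates to the greedy behavior; larger values reintroduce the stochastic diversity the PR is after.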
### Testing
The feature has been tested locally by running `model.generate` with `do_sample=True` and various sampling parameters (`temperature`, `top_k`, `top_p`). The tests confirm that:
1. The code runs without errors.
2. The generated output is stochastic and differs from the deterministic greedy output.
3. The generated output changes on subsequent runs, confirming that sampling is active.
--- Generating with Greedy Group Beam Search ---
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Greedy Outputs:
0: The best way to learn about large language models is to learn about the language.
1: The best way to learn about large language models is to learn about the language.
2: The best way to learn about large language models is to look at a few examples of how to use them.
The first step is to look at a few examples of how to use them. The second step is to look at a few examples of how to use them. The third step
3: The best way to learn about large language models is to look at a few examples of how to use them.
The first step is to look at a few examples of how to use them. The first step is to look at a few examples of how to use them. The second step
--- Generating with Sampling Group Beam Search ---
Sampling Outputs:
0: The best way to learn about large language models is to look at a few simple examples of how a language can be used to understand languages. For example, a simple example can look at a simple example of how a language can be used to understand languages. For example, a simple example can look at
1: The best way to learn about large language models is to find new ways to work around them.
When you start making your own languages, you should be careful not to think about how you are doing this.
You should not be just focusing on the tools that you
2: The best way to learn about large language models is to look at a few simple examples of how a language can be used to understand languages. For example, a simple example can look at a simple example of how an interpreter can be used to understand languages.
3: The best way to learn about large language models is to find new ways to work around them.
When you start making your own languages, you should be careful not to think about how you are doing this.
You should not be just focusing on the tools that are
--- Generating with MORE RANDOM Sampling Group Beam Search ---
More Random Sampling Outputs:
0: The best way to learn about large language models is through the research paper published in Psychological Science .
I was so lucky (because everyone else I was lucky to be with has been with) that it was very helpful to do some small tasks so I couldn't stop feeling inspired by
1: The best way to learn about large language models is through the research paper published in Psychological Science .
I was so lucky (because everyone else I was lucky to be with has been with) that it was very helpful to do some small tasks so I couldn't stop feeling very good
2: The best way to learn about large language models is to take a look at a few simple language modeling tutorials you can be sure you’ll learn a lot about the languages and frameworks that you’ll be used to writing them. These tutorials can be found in The Cucumber, Python
3: The best way to learn about large language models is to take a look at a few simple language modeling tutorials you can be sure you’ll learn a lot about the languages and frameworks that you’ll be used to writing them. These tutorials can be found in The A Language Modeler blog | {
"login": "gspeter-max",
"id": 193389584,
"node_id": "U_kgDOC4bkEA",
"avatar_url": "https://avatars.githubusercontent.com/u/193389584?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gspeter-max",
"html_url": "https://github.com/gspeter-max",
"followers_url": "https://api.github.com/users/gspeter-max/followers",
"following_url": "https://api.github.com/users/gspeter-max/following{/other_user}",
"gists_url": "https://api.github.com/users/gspeter-max/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gspeter-max/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gspeter-max/subscriptions",
"organizations_url": "https://api.github.com/users/gspeter-max/orgs",
"repos_url": "https://api.github.com/users/gspeter-max/repos",
"events_url": "https://api.github.com/users/gspeter-max/events{/privacy}",
"received_events_url": "https://api.github.com/users/gspeter-max/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38648/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38647/comments | https://api.github.com/repos/huggingface/transformers/issues/38647/events | https://github.com/huggingface/transformers/pull/38647 | 3,125,484,055 | PR_kwDOCUB6oc6ZbfRD | 38,647 | Reparent all the remaining Causal LM tests | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-06T17:58:42 | 2025-06-19T16:59:36 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38647",
"html_url": "https://github.com/huggingface/transformers/pull/38647",
"diff_url": "https://github.com/huggingface/transformers/pull/38647.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38647.patch",
"merged_at": null
} | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38647/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/38646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38646/comments | https://api.github.com/repos/huggingface/transformers/issues/38646/events | https://github.com/huggingface/transformers/pull/38646 | 3,125,475,465 | PR_kwDOCUB6oc6ZbdY- | 38,646 | Unbreak optimum-executorch | {
"login": "guangy10",
"id": 42389959,
"node_id": "MDQ6VXNlcjQyMzg5OTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/42389959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guangy10",
"html_url": "https://github.com/guangy10",
"followers_url": "https://api.github.com/users/guangy10/followers",
"following_url": "https://api.github.com/users/guangy10/following{/other_user}",
"gists_url": "https://api.github.com/users/guangy10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guangy10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guangy10/subscriptions",
"organizations_url": "https://api.github.com/users/guangy10/orgs",
"repos_url": "https://api.github.com/users/guangy10/repos",
"events_url": "https://api.github.com/users/guangy10/events{/privacy}",
"received_events_url": "https://api.github.com/users/guangy10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T17:54:20 | 2025-06-13T09:13:32 | 2025-06-13T09:13:32 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38646",
"html_url": "https://github.com/huggingface/transformers/pull/38646",
"diff_url": "https://github.com/huggingface/transformers/pull/38646.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38646.patch",
"merged_at": "2025-06-13T09:13:32"
} | # What does this PR do?
Reverts the minimal changes made in https://github.com/huggingface/transformers/pull/37866 that break export to ExecuTorch in [huggingface/optimum-executorch](https://github.com/huggingface/optimum-executorch) when developing against the latest `transformers` trunk.
TODO: Will update with tests shortly
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. I surfaced the issue in Slack
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @Cyrilvallez @ydshieh
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38646/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38645/comments | https://api.github.com/repos/huggingface/transformers/issues/38645/events | https://github.com/huggingface/transformers/pull/38645 | 3,125,446,286 | PR_kwDOCUB6oc6ZbXBy | 38,645 | support loading qwen3 gguf | {
"login": "44670",
"id": 3153194,
"node_id": "MDQ6VXNlcjMxNTMxOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3153194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/44670",
"html_url": "https://github.com/44670",
"followers_url": "https://api.github.com/users/44670/followers",
"following_url": "https://api.github.com/users/44670/following{/other_user}",
"gists_url": "https://api.github.com/users/44670/gists{/gist_id}",
"starred_url": "https://api.github.com/users/44670/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/44670/subscriptions",
"organizations_url": "https://api.github.com/users/44670/orgs",
"repos_url": "https://api.github.com/users/44670/repos",
"events_url": "https://api.github.com/users/44670/events{/privacy}",
"received_events_url": "https://api.github.com/users/44670/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T17:38:56 | 2025-07-15T09:54:09 | 2025-07-15T09:53:41 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38645",
"html_url": "https://github.com/huggingface/transformers/pull/38645",
"diff_url": "https://github.com/huggingface/transformers/pull/38645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38645.patch",
"merged_at": "2025-07-15T09:53:41"
} | # What does this PR do?
This PR adds gguf loading support for qwen3 models.
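For context, most of the work in adding GGUF loading support for a new architecture is registering a tensor-name mapping between the GGUF naming scheme and the transformers naming scheme. The sketch below illustrates that idea only; the prefixes in the dictionary are hypothetical placeholders, not the actual qwen3 mapping introduced by this PR.

```python
# Illustrative sketch: translate GGUF tensor names into transformers names.
# The mapping entries below are made-up examples, not the real qwen3 mapping.
GGUF_TO_HF = {
    "token_embd": "model.embed_tokens",
    "output_norm": "model.norm",
    "output": "lm_head",
}

def remap_gguf_name(gguf_name: str) -> str:
    """Return the transformers-style name for a GGUF tensor name."""
    for gguf_prefix, hf_prefix in GGUF_TO_HF.items():
        if gguf_name.startswith(gguf_prefix):
            # Replace only the leading prefix; the rest of the name is kept.
            return gguf_name.replace(gguf_prefix, hf_prefix, 1)
    # Names without a registered prefix pass through unchanged.
    return gguf_name

print(remap_gguf_name("token_embd.weight"))  # model.embed_tokens.weight
```

End users would then load the checkpoint through the existing `gguf_file` argument, e.g. `AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=filename)`.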
## Before submitting
- [No] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [Yes] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [No afaik] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [Yes] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [No] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38645/reactions",
"total_count": 6,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38645/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38644/comments | https://api.github.com/repos/huggingface/transformers/issues/38644/events | https://github.com/huggingface/transformers/pull/38644 | 3,125,349,176 | PR_kwDOCUB6oc6ZbBML | 38,644 | feat: add sliding window attention support to Continuous Batching | {
"login": "McPatate",
"id": 9112841,
"node_id": "MDQ6VXNlcjkxMTI4NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/McPatate",
"html_url": "https://github.com/McPatate",
"followers_url": "https://api.github.com/users/McPatate/followers",
"following_url": "https://api.github.com/users/McPatate/following{/other_user}",
"gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/McPatate/subscriptions",
"organizations_url": "https://api.github.com/users/McPatate/orgs",
"repos_url": "https://api.github.com/users/McPatate/repos",
"events_url": "https://api.github.com/users/McPatate/events{/privacy}",
"received_events_url": "https://api.github.com/users/McPatate/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T17:02:13 | 2025-08-28T12:04:14 | 2025-08-28T12:04:14 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38644",
"html_url": "https://github.com/huggingface/transformers/pull/38644",
"diff_url": "https://github.com/huggingface/transformers/pull/38644.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38644.patch",
"merged_at": null
} | # What does this PR do?
Add sliding window attention support to Continuous Batching code. Took inspiration from the existing `SlidingWindowCache`. | {
"login": "McPatate",
"id": 9112841,
"node_id": "MDQ6VXNlcjkxMTI4NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9112841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/McPatate",
"html_url": "https://github.com/McPatate",
"followers_url": "https://api.github.com/users/McPatate/followers",
"following_url": "https://api.github.com/users/McPatate/following{/other_user}",
"gists_url": "https://api.github.com/users/McPatate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/McPatate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/McPatate/subscriptions",
"organizations_url": "https://api.github.com/users/McPatate/orgs",
"repos_url": "https://api.github.com/users/McPatate/repos",
"events_url": "https://api.github.com/users/McPatate/events{/privacy}",
"received_events_url": "https://api.github.com/users/McPatate/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38644/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38643/comments | https://api.github.com/repos/huggingface/transformers/issues/38643/events | https://github.com/huggingface/transformers/pull/38643 | 3,125,338,431 | PR_kwDOCUB6oc6Za-xc | 38,643 | Skip torchscript tests for 2 models | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T16:58:08 | 2025-06-06T18:17:39 | 2025-06-06T18:17:37 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38643",
"html_url": "https://github.com/huggingface/transformers/pull/38643",
"diff_url": "https://github.com/huggingface/transformers/pull/38643.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38643.patch",
"merged_at": "2025-06-06T18:17:37"
} | # What does this PR do?
We won't actively maintain the torchscript stuff.
See #35972
Probably I need to do the same as in #35972, but you get the idea :-)
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38643/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38642/comments | https://api.github.com/repos/huggingface/transformers/issues/38642/events | https://github.com/huggingface/transformers/pull/38642 | 3,125,267,819 | PR_kwDOCUB6oc6Zau3V | 38,642 | Drop as_target_processor from the _call_ and pad methods | {
"login": "marcndo",
"id": 178362075,
"node_id": "U_kgDOCqGW2w",
"avatar_url": "https://avatars.githubusercontent.com/u/178362075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcndo",
"html_url": "https://github.com/marcndo",
"followers_url": "https://api.github.com/users/marcndo/followers",
"following_url": "https://api.github.com/users/marcndo/following{/other_user}",
"gists_url": "https://api.github.com/users/marcndo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcndo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcndo/subscriptions",
"organizations_url": "https://api.github.com/users/marcndo/orgs",
"repos_url": "https://api.github.com/users/marcndo/repos",
"events_url": "https://api.github.com/users/marcndo/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcndo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T16:31:41 | 2025-06-12T22:43:01 | 2025-06-09T19:26:09 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38642",
"html_url": "https://github.com/huggingface/transformers/pull/38642",
"diff_url": "https://github.com/huggingface/transformers/pull/38642.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38642.patch",
"merged_at": "2025-06-09T19:26:09"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #38609
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). Yes
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section? Yes
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38642/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38641/comments | https://api.github.com/repos/huggingface/transformers/issues/38641/events | https://github.com/huggingface/transformers/pull/38641 | 3,125,223,404 | PR_kwDOCUB6oc6ZalIP | 38,641 | Adds Universal Intelligence to awesome transformers documentation | {
"login": "victor-bluera",
"id": 171895173,
"node_id": "U_kgDOCj7phQ",
"avatar_url": "https://avatars.githubusercontent.com/u/171895173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/victor-bluera",
"html_url": "https://github.com/victor-bluera",
"followers_url": "https://api.github.com/users/victor-bluera/followers",
"following_url": "https://api.github.com/users/victor-bluera/following{/other_user}",
"gists_url": "https://api.github.com/users/victor-bluera/gists{/gist_id}",
"starred_url": "https://api.github.com/users/victor-bluera/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/victor-bluera/subscriptions",
"organizations_url": "https://api.github.com/users/victor-bluera/orgs",
"repos_url": "https://api.github.com/users/victor-bluera/repos",
"events_url": "https://api.github.com/users/victor-bluera/events{/privacy}",
"received_events_url": "https://api.github.com/users/victor-bluera/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T16:11:21 | 2025-10-25T16:31:22 | 2025-10-25T16:31:22 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38641",
"html_url": "https://github.com/huggingface/transformers/pull/38641",
"diff_url": "https://github.com/huggingface/transformers/pull/38641.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38641.patch",
"merged_at": "2025-10-25T16:31:22"
} | # What does this PR do?
Adds [Universal Intelligence](https://github.com/blueraai/universal-intelligence) to `awesome-transformers.md` documentation
## Who can review?
Documentation: @stevhliu @LysandreJik
| {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38641/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38640/comments | https://api.github.com/repos/huggingface/transformers/issues/38640/events | https://github.com/huggingface/transformers/pull/38640 | 3,125,139,597 | PR_kwDOCUB6oc6ZaSod | 38,640 | Fix qwen2-audio chat template audio placeholder insertion | {
"login": "Isotr0py",
"id": 41363108,
"node_id": "MDQ6VXNlcjQxMzYzMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/41363108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Isotr0py",
"html_url": "https://github.com/Isotr0py",
"followers_url": "https://api.github.com/users/Isotr0py/followers",
"following_url": "https://api.github.com/users/Isotr0py/following{/other_user}",
"gists_url": "https://api.github.com/users/Isotr0py/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Isotr0py/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Isotr0py/subscriptions",
"organizations_url": "https://api.github.com/users/Isotr0py/orgs",
"repos_url": "https://api.github.com/users/Isotr0py/repos",
"events_url": "https://api.github.com/users/Isotr0py/events{/privacy}",
"received_events_url": "https://api.github.com/users/Isotr0py/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T15:37:10 | 2025-06-23T09:35:04 | 2025-06-09T09:56:42 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38640",
"html_url": "https://github.com/huggingface/transformers/pull/38640",
"diff_url": "https://github.com/huggingface/transformers/pull/38640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38640.patch",
"merged_at": "2025-06-09T09:56:42"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/vllm-project/vllm/pull/19230#discussion_r2131871974
- Correct `message['type'] == 'audio'` to `content['type'] == 'audio'` in the Qwen2-Audio chat template, otherwise the audio placeholder can't be inserted into the prompt properly for the conversation below:
```python
conversation = [
{"role": "user", "content": [
{"type": "audio"},
{"type": "text", "text": "What does the person say?"},
]},
]
```
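To illustrate the effect of the fix, here is the placeholder-insertion logic restated as plain Python. This is illustrative only: the real template is Jinja, and the `<|AUDIO|>` placeholder token shown here is an assumption, not necessarily the exact token the template emits.

```python
# Plain-Python re-statement of the fixed chat-template logic (illustrative).
def render_user_turn(message: dict) -> str:
    parts = []
    for content in message["content"]:
        # The bug: the template checked message['type'], which is never set on
        # the message dict, so the audio branch never fired. Checking
        # content['type'] on each content item fixes it.
        if content["type"] == "audio":
            parts.append("<|AUDIO|>")
        elif content["type"] == "text":
            parts.append(content["text"])
    return "".join(parts)

conversation = [
    {"role": "user", "content": [
        {"type": "audio"},
        {"type": "text", "text": "What does the person say?"},
    ]},
]
print(render_user_turn(conversation[0]))  # <|AUDIO|>What does the person say?
```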
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@zucchini-nlp
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Isotr0py",
"id": 41363108,
"node_id": "MDQ6VXNlcjQxMzYzMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/41363108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Isotr0py",
"html_url": "https://github.com/Isotr0py",
"followers_url": "https://api.github.com/users/Isotr0py/followers",
"following_url": "https://api.github.com/users/Isotr0py/following{/other_user}",
"gists_url": "https://api.github.com/users/Isotr0py/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Isotr0py/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Isotr0py/subscriptions",
"organizations_url": "https://api.github.com/users/Isotr0py/orgs",
"repos_url": "https://api.github.com/users/Isotr0py/repos",
"events_url": "https://api.github.com/users/Isotr0py/events{/privacy}",
"received_events_url": "https://api.github.com/users/Isotr0py/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38640/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38640/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38639/comments | https://api.github.com/repos/huggingface/transformers/issues/38639/events | https://github.com/huggingface/transformers/issues/38639 | 3,124,967,531 | I_kwDOCUB6oc66Qzhr | 38,639 | ImportError: cannot import name 'DTensor' from 'torch.distributed.tensor' | {
"login": "ybdong919",
"id": 16695937,
"node_id": "MDQ6VXNlcjE2Njk1OTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/16695937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ybdong919",
"html_url": "https://github.com/ybdong919",
"followers_url": "https://api.github.com/users/ybdong919/followers",
"following_url": "https://api.github.com/users/ybdong919/following{/other_user}",
"gists_url": "https://api.github.com/users/ybdong919/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ybdong919/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ybdong919/subscriptions",
"organizations_url": "https://api.github.com/users/ybdong919/orgs",
"repos_url": "https://api.github.com/users/ybdong919/repos",
"events_url": "https://api.github.com/users/ybdong919/events{/privacy}",
"received_events_url": "https://api.github.com/users/ybdong919/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-06-06T14:33:07 | 2025-06-10T17:29:03 | 2025-06-09T13:12:55 | NONE | null | null | null | null | ### System Info
```
transformers/pytorch_utils.py, line 300, in id_tensor_storage:
    if is_torch_greater_or_equal_than_2_0:
        from torch.distributed.tensor import DTensor
```
The error `ImportError: cannot import name 'DTensor' from 'torch.distributed.tensor'` arises because the `DTensor` class only became importable from `torch.distributed.tensor` in PyTorch 2.5; the guard above, however, only checks for version 2.0.
So `if is_torch_greater_or_equal_than_2_0:` should be `if is_torch_greater_or_equal_than_2_5:`.
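A minimal sketch of the version gate the report argues for, assuming (as the report states) that `torch.distributed.tensor.DTensor` is only available from PyTorch 2.5 onwards, with earlier releases keeping it under the private `torch.distributed._tensor` module:

```python
# Sketch: pick the DTensor import path based on the installed torch version.
# Assumption (per this report): the public torch.distributed.tensor location
# exists only from PyTorch 2.5 on.
def _major_minor(v: str) -> tuple:
    """Parse '2.4.1' -> (2, 4) for a simple version comparison."""
    return tuple(int(part) for part in v.split(".")[:2])

def dtensor_import_path(torch_version: str) -> str:
    if _major_minor(torch_version) >= (2, 5):
        return "torch.distributed.tensor"
    return "torch.distributed._tensor"

print(dtensor_import_path("2.4.1"))  # torch.distributed._tensor
print(dtensor_import_path("2.5.0"))  # torch.distributed.tensor
```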
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
Traceback (most recent call last):
  File "/blue/bphl-florida/dongyibo/PPML/Geneformer/geneformer2/Geneformer/cancer_type_classify.py", line 25, in <module>
    all_metrics = cc.validate(model_directory="/blue/bphl-florida/dongyibo/PPML/Geneformer/geneformer2/Geneformer/gf-12L-95M-i4096_CLcancer",
  File "/blue/bphl-florida/dongyibo/PPML/Geneformer/geneformer2/Geneformer/geneformer/classifier.py", line 794, in validate
    trainer = self.train_classifier(
  File "/blue/bphl-florida/dongyibo/PPML/Geneformer/geneformer2/Geneformer/geneformer/classifier.py", line 1282, in train_classifier
    trainer.train()
  File "/blue/bphl-florida/dongyibo/conda/envs/DL/lib/python3.9/site-packages/transformers/trainer.py", line 2240, in train
    return inner_training_loop(
  File "/blue/bphl-florida/dongyibo/conda/envs/DL/lib/python3.9/site-packages/transformers/trainer.py", line 2656, in _inner_training_loop
    self._maybe_log_save_evaluate(
  File "/blue/bphl-florida/dongyibo/conda/envs/DL/lib/python3.9/site-packages/transformers/trainer.py", line 3102, in _maybe_log_save_evaluate
    self._save_checkpoint(model, trial)
  File "/blue/bphl-florida/dongyibo/conda/envs/DL/lib/python3.9/site-packages/transformers/trainer.py", line 3199, in _save_checkpoint
    self.save_model(output_dir, _internal_call=True)
  File "/blue/bphl-florida/dongyibo/conda/envs/DL/lib/python3.9/site-packages/transformers/trainer.py", line 3911, in save_model
    self._save(output_dir)
  File "/blue/bphl-florida/dongyibo/conda/envs/DL/lib/python3.9/site-packages/transformers/trainer.py", line 4015, in _save
    self.model.save_pretrained(
  File "/blue/bphl-florida/dongyibo/conda/envs/DL/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3572, in save_pretrained
    ptrs[id_tensor_storage(tensor)].append(name)
  File "/blue/bphl-florida/dongyibo/conda/envs/DL/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 300, in id_tensor_storage
    from torch.distributed.tensor import DTensor
ImportError: cannot import name 'DTensor' from 'torch.distributed.tensor'
```
### Expected behavior
In `pytorch_utils.py`, line 299, in `id_tensor_storage`:
`if is_torch_greater_or_equal_than_2_0:` should be `if is_torch_greater_or_equal_than_2_5:` | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38639/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38638/comments | https://api.github.com/repos/huggingface/transformers/issues/38638/events | https://github.com/huggingface/transformers/pull/38638 | 3,124,801,652 | PR_kwDOCUB6oc6ZZJiy | 38,638 | 5GB to 50GB as a default | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-06-06T13:33:05 | 2025-06-09T13:09:20 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38638",
"html_url": "https://github.com/huggingface/transformers/pull/38638",
"diff_url": "https://github.com/huggingface/transformers/pull/38638.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38638.patch",
"merged_at": null
} | Will need to update tests | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38638/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38638/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/38637 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38637/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38637/comments | https://api.github.com/repos/huggingface/transformers/issues/38637/events | https://github.com/huggingface/transformers/pull/38637 | 3,124,177,537 | PR_kwDOCUB6oc6ZXA5W | 38,637 | Fix attention mask expansion when converting to executorch | {
"login": "pweglik",
"id": 36445788,
"node_id": "MDQ6VXNlcjM2NDQ1Nzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/36445788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pweglik",
"html_url": "https://github.com/pweglik",
"followers_url": "https://api.github.com/users/pweglik/followers",
"following_url": "https://api.github.com/users/pweglik/following{/other_user}",
"gists_url": "https://api.github.com/users/pweglik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pweglik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pweglik/subscriptions",
"organizations_url": "https://api.github.com/users/pweglik/orgs",
"repos_url": "https://api.github.com/users/pweglik/repos",
"events_url": "https://api.github.com/users/pweglik/events{/privacy}",
"received_events_url": "https://api.github.com/users/pweglik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T09:16:58 | 2025-06-09T15:00:55 | 2025-06-09T15:00:55 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38637",
"html_url": "https://github.com/huggingface/transformers/pull/38637",
"diff_url": "https://github.com/huggingface/transformers/pull/38637.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38637.patch",
"merged_at": "2025-06-09T15:00:55"
} | # What does this PR do?
I was trying to convert CLIP to ExecuTorch and hit an issue with `AttentionMaskConverter._expand_mask`:
```
# load model
import torch
from transformers import AutoProcessor, AutoModel

model_clip = AutoModel.from_pretrained("openai/clip-vit-base-patch32", torch_dtype=torch.float16)  # be careful with alternative attention implementations
processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")

class TextModel(torch.nn.Module):
    def __init__(self, text_model, text_projection):
        super().__init__()
        self.text_model = text_model
        self.text_projection = text_projection

    def forward(self, input_ids, attention_mask=None):
        output = self.text_model(input_ids, attention_mask)
        output = output.pooler_output
        output = self.text_projection(output)
        output = torch.flatten(output)
        return output

text_model = TextModel(model_clip.text_model, model_clip.text_projection)

from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower
from torch.export import export

example_inputs = (torch.randint(0, 20000, (1, 76), dtype=torch.int64), torch.randint(0, 2, (1, 76), dtype=torch.int))
exported_program = export(text_model, example_inputs)
executorch_program = to_edge_transform_and_lower(
    exported_program,
    partitioner=[XnnpackPartitioner()],
).to_executorch()

with open("clip_text_model.pte", "wb") as file:
    file.write(executorch_program.buffer)
```
used to fail with:
```
SpecViolationError: These operators are taking Tensor inputs with mismatched dtypes:
Operator: <EdgeOpOverload: aten.sub.Tensor>: schema = aten::sub.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor with args: {'self': torch.float32, 'other': torch.float16, '__ret_0': torch.float16}
stack trace: File "/var/folders/wr/dzqrxx290wdd7hsn7dnrjn3r0000gn/T/ipykernel_84249/3576045078.py", line 8, in forward
output = self.text_model(input_ids, attention_mask)
File "/Users/przemek/anaconda3/envs/executorch-2/lib/python3.10/site-packages/transformers/utils/generic.py", line 969, in wrapper
output = func(self, *args, **kwargs)
File "/Users/przemek/anaconda3/envs/executorch-2/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 731, in forward
attention_mask = _prepare_4d_attention_mask(
File "/Users/przemek/anaconda3/envs/executorch-2/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py", line 478, in _prepare_4d_attention_mask
return AttentionMaskConverter._expand_mask(mask=mask, dtype=dtype, tgt_len=tgt_len)
File "/Users/przemek/anaconda3/envs/executorch-2/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py", line 212, in _expand_mask
inverted_mask = 1.0 - expanded_mask
Please make sure the dtypes of the Tensor inputs are the same as the dtypes of the corresponding outputs.
```
```
It was caused by the literal `1.0` being float32 while the mask had been converted to the model's dtype (float16 in my case), which produced the mismatch. After this small fix, the model converts correctly.
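For illustration only, a stdlib mock of the dtype rule at play (`FakeTensor` and `invert_mask` are made-up names, not torch or transformers APIs): the fix amounts to materializing the constant at the mask's own dtype rather than letting a float32 literal leak into the subtraction.

```python
from dataclasses import dataclass

@dataclass
class FakeTensor:
    value: float
    dtype: str  # e.g. "float16" or "float32"

def invert_mask(expanded_mask: FakeTensor) -> FakeTensor:
    # Instead of "1.0 - expanded_mask" (a float32 literal against a
    # float16 tensor), build the constant at the mask's dtype so the
    # subtraction never mixes precisions.
    one = FakeTensor(1.0, expanded_mask.dtype)
    return FakeTensor(one.value - expanded_mask.value, one.dtype)
```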
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes partially #32506
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38637/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38636 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38636/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38636/comments | https://api.github.com/repos/huggingface/transformers/issues/38636/events | https://github.com/huggingface/transformers/pull/38636 | 3,124,028,089 | PR_kwDOCUB6oc6ZWf5H | 38,636 | Skip `test_initialization` for `SwiftFormer` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T08:23:36 | 2025-06-06T08:47:12 | 2025-06-06T08:47:10 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38636",
"html_url": "https://github.com/huggingface/transformers/pull/38636",
"diff_url": "https://github.com/huggingface/transformers/pull/38636.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38636.patch",
"merged_at": "2025-06-06T08:47:10"
} | # What does this PR do?
`SwiftFormer` has
```
class SwiftFormerEfficientAdditiveAttention(nn.Module):
    def __init__(self, config: SwiftFormerConfig, dim: int = 512):
        super().__init__()
        self.w_g = nn.Parameter(torch.randn(dim, 1))
```
and `self.w_g` is not handled by `_init_weights`, nor by anything like `initializer_range`, so it is just a normal distribution with mean 0 and std 1, which makes `test_initialization` quite flaky (> 10% failure rate).
This PR just skips `w_g` in the (already overwritten) test.
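A hedged sketch of the kind of skip the PR describes (the function and parameter names here are illustrative, not the actual test code): parameters whose names contain `w_g` are filtered out before the initialization check runs.

```python
def filter_init_check_params(named_params, skip_substrings=("w_g",)):
    """Yield (name, value) pairs whose names match none of the skips."""
    for name, value in named_params:
        if any(s in name for s in skip_substrings):
            continue  # w_g is torch.randn (std 1) by design; don't flag it
        yield name, value

params = [
    ("encoder.block.attn.w_g", 1.37),        # would trip a near-zero check
    ("encoder.block.linear.weight", 0.008),  # normally initialized weight
]
checked = dict(filter_init_check_params(params))
```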
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38636/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38635 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38635/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38635/comments | https://api.github.com/repos/huggingface/transformers/issues/38635/events | https://github.com/huggingface/transformers/pull/38635 | 3,123,999,045 | PR_kwDOCUB6oc6ZWZb1 | 38,635 | [cache] make all classes cache compatible finally | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T08:12:02 | 2025-07-16T12:00:18 | 2025-07-16T12:00:18 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38635",
"html_url": "https://github.com/huggingface/transformers/pull/38635",
"diff_url": "https://github.com/huggingface/transformers/pull/38635.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38635.patch",
"merged_at": "2025-07-16T12:00:18"
} | # What does this PR do?
As per title, and let's get rid of the `_supports_cache`/`_supports_quantized_cache` flags. From now on we will assume all models support cache and initialize a `DynamicCache` (a model-specific cache in the case of mamba) by default.
For the static cache, we can't yet assume all models support it because even if a model technically can use `StaticCache`, it can't always compile fullgraph. We have auto-compilation for the static cache enabled, so maybe the compilation should check for something like `_can_compile_fullgraph` rather than `_supports_static_cache`?
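For illustration, a hedged sketch of the default-cache assumption described above (cache class names are strings here; real code would return cache instances, and the mamba mapping is an assumption, not the PR's actual dispatch):

```python
def default_cache_for(model_type: str) -> str:
    # Assume every model supports caching: recurrent/state-space models
    # get their model-specific cache, everything else gets DynamicCache.
    model_specific = {"mamba": "MambaCache", "mamba2": "Mamba2Cache"}
    return model_specific.get(model_type, "DynamicCache")
```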
I checked all models are updated and the generation tests are passing. Note, the PR depends on #38751 which cleans up non-generative models from `past_key_values` | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38635/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38635/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/38634 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38634/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38634/comments | https://api.github.com/repos/huggingface/transformers/issues/38634/events | https://github.com/huggingface/transformers/issues/38634 | 3,123,973,333 | I_kwDOCUB6oc66NAzV | 38,634 | Download models from a private hub in 2025 | {
"login": "DanielSchuhmacher",
"id": 178552926,
"node_id": "U_kgDOCqSAXg",
"avatar_url": "https://avatars.githubusercontent.com/u/178552926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DanielSchuhmacher",
"html_url": "https://github.com/DanielSchuhmacher",
"followers_url": "https://api.github.com/users/DanielSchuhmacher/followers",
"following_url": "https://api.github.com/users/DanielSchuhmacher/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielSchuhmacher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DanielSchuhmacher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielSchuhmacher/subscriptions",
"organizations_url": "https://api.github.com/users/DanielSchuhmacher/orgs",
"repos_url": "https://api.github.com/users/DanielSchuhmacher/repos",
"events_url": "https://api.github.com/users/DanielSchuhmacher/events{/privacy}",
"received_events_url": "https://api.github.com/users/DanielSchuhmacher/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | null | [] | 2025-06-06T08:00:23 | 2025-06-13T13:45:13 | 2025-06-13T13:45:13 | NONE | null | null | null | null | ### Feature request
In the context of a private hub deployment, customers would like to use from_pretrained() to load models from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature.
The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted.
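One partial workaround today (hedged: support depends on the `huggingface_hub` version deployed) is the `HF_ENDPOINT` environment variable, which points hub-aware clients at a different hub URL; the endpoint below is illustrative:

```python
import os

# Point hub clients at a private deployment instead of huggingface.co.
# In a real session this must be set before the hub libraries are imported.
os.environ["HF_ENDPOINT"] = "https://hub.example.internal"
endpoint = os.environ.get("HF_ENDPOINT", "https://huggingface.co")
```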
This issue was raised before here: https://github.com/huggingface/transformers/issues/15514
@juliensimon
### Motivation
none
### Your contribution
none | {
"login": "DanielSchuhmacher",
"id": 178552926,
"node_id": "U_kgDOCqSAXg",
"avatar_url": "https://avatars.githubusercontent.com/u/178552926?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DanielSchuhmacher",
"html_url": "https://github.com/DanielSchuhmacher",
"followers_url": "https://api.github.com/users/DanielSchuhmacher/followers",
"following_url": "https://api.github.com/users/DanielSchuhmacher/following{/other_user}",
"gists_url": "https://api.github.com/users/DanielSchuhmacher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DanielSchuhmacher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanielSchuhmacher/subscriptions",
"organizations_url": "https://api.github.com/users/DanielSchuhmacher/orgs",
"repos_url": "https://api.github.com/users/DanielSchuhmacher/repos",
"events_url": "https://api.github.com/users/DanielSchuhmacher/events{/privacy}",
"received_events_url": "https://api.github.com/users/DanielSchuhmacher/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38634/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/38633 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/38633/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/38633/comments | https://api.github.com/repos/huggingface/transformers/issues/38633/events | https://github.com/huggingface/transformers/pull/38633 | 3,123,323,845 | PR_kwDOCUB6oc6ZUH3R | 38,633 | log: Add logging when using split_batches and per_device_train_batch_size | {
"login": "KeshavSingh29",
"id": 130352102,
"node_id": "U_kgDOB8UD5g",
"avatar_url": "https://avatars.githubusercontent.com/u/130352102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KeshavSingh29",
"html_url": "https://github.com/KeshavSingh29",
"followers_url": "https://api.github.com/users/KeshavSingh29/followers",
"following_url": "https://api.github.com/users/KeshavSingh29/following{/other_user}",
"gists_url": "https://api.github.com/users/KeshavSingh29/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KeshavSingh29/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KeshavSingh29/subscriptions",
"organizations_url": "https://api.github.com/users/KeshavSingh29/orgs",
"repos_url": "https://api.github.com/users/KeshavSingh29/repos",
"events_url": "https://api.github.com/users/KeshavSingh29/events{/privacy}",
"received_events_url": "https://api.github.com/users/KeshavSingh29/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-06-06T01:33:44 | 2025-06-18T16:27:20 | 2025-06-18T16:26:46 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/38633",
"html_url": "https://github.com/huggingface/transformers/pull/38633",
"diff_url": "https://github.com/huggingface/transformers/pull/38633.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/38633.patch",
"merged_at": "2025-06-18T16:26:46"
} | # What does this PR do?
Adds a logger warning when the user passes `split_batches=True` in the `accelerator_config` arg of `Trainer`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
When using `split_batches=True` in the `accelerator_config` arg of `Trainer`, each batch is split equally across all processes.
Hence, the `per_device_train_batch_size` value is actually the aggregate batch size that gets split across all processes.
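The arithmetic behind the warning can be sketched as follows (a simplified model of Accelerate's behavior, not its actual code):

```python
def per_process_batch_size(per_device_train_batch_size: int,
                           num_processes: int,
                           split_batches: bool) -> int:
    # split_batches=True: one dataloader batch is divided across processes,
    # so each process actually sees batch_size // num_processes samples.
    # split_batches=False: every process draws its own full-size batch.
    if split_batches:
        if per_device_train_batch_size % num_processes:
            raise ValueError("batch size must be divisible by num_processes")
        return per_device_train_batch_size // num_processes
    return per_device_train_batch_size
```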
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- https://github.com/huggingface/transformers/issues/38484#issue-3101907511
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? No
## Who can review?
@SunMarc
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface and @SunMarc
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/38633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/38633/timeline | null | null | null | null | true | true |