| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (null) | comments (list) | created_at (timestamp[ms]) | updated_at (timestamp[ms]) | closed_at (timestamp[ms]) | author_association (string) | type (dict) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | sub_issues_summary (dict) | issue_dependencies_summary (dict) | is_pull_request (bool) | is_closed (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/41846
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41846/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41846/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41846/events
|
https://github.com/huggingface/transformers/issues/41846
| 3,549,231,256
|
I_kwDOCUB6oc7TjPiY
| 41,846
|
Incompatibility single-modality AutoProcessor and PEFT Adapter
|
{
"login": "tomaarsen",
"id": 37621491,
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaarsen",
"html_url": "https://github.com/tomaarsen",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-24T12:25:44
| 2025-10-24T12:25:44
| null |
MEMBER
| null | null | null | null |
Hello!
## Bug report overview
* I've encountered an incompatibility between `AutoProcessor` and PEFT Adapters
## Details
Straight to the point:
```python
from transformers import AutoProcessor, AutoModel
from peft import LoraConfig, TaskType
model_name = "google-bert/bert-base-uncased"
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoProcessor.from_pretrained(model_name)
peft_config = LoraConfig(
task_type=TaskType.FEATURE_EXTRACTION
)
model.add_adapter(peft_config)
save_path = "peft_processor_test_path"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
loaded_model = AutoModel.from_pretrained(save_path)
loaded_tokenizer = AutoProcessor.from_pretrained(save_path)
print(loaded_model)
print(loaded_tokenizer)
```
Initially, I ran into #4273, which is being resolved in #41604. When I use that branch, the following issue appears:
```
Traceback (most recent call last):
File "c:\code\transformers\demo_test_peft_processor.py", line 21, in <module>
loaded_tokenizer = AutoProcessor.from_pretrained(save_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\code\transformers\src\transformers\models\auto\processing_auto.py", line 360, in from_pretrained
config = AutoConfig.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\code\transformers\src\transformers\models\auto\configuration_auto.py", line 1383, in from_pretrained
raise ValueError(
ValueError: Unrecognized model in peft_processor_test_path. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: aimv2, aimv2_vision_model, albert, align, altclip, apertus, arcee, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, bitnet, blenderbot, blenderbot-small, blip, blip-2, blip_2_qformer, bloom, blt, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, cohere2_vision, colpali, colqwen2, conditional_detr, convbert, convnext, convnextv2, cpmant, csm, ctrl, cvt, cwm, d_fine, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v2, deepseek_v3, deepseek_vl, deepseek_vl_hybrid, deformable_detr, deit, depth_anything, depth_pro, deta, detr, dia, diffllama, dinat, dinov2, dinov2_with_registers, dinov3_convnext, dinov3_vit, distilbert, doge, donut-swin, dots1, dpr, dpt, edgetam, edgetam_video, edgetam_vision_model, efficientformer, efficientloftr, efficientnet, electra, emu3, encodec, encoder-decoder, eomt, ernie, ernie4_5, ernie4_5_moe, ernie_m, esm, evolla, exaone4, falcon, falcon_h1, falcon_mamba, fastspeech2_conformer, fastspeech2_conformer_with_hifigan, flaubert, flava, flex_olmo, florence2, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, gemma3n, gemma3n_audio, gemma3n_text, gemma3n_vision, git, glm, glm4, glm4_moe, glm4v, glm4v_moe, glm4v_moe_text, glm4v_text, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gpt_oss, gptj, gptsan-japanese, granite, granite_speech, granitemoe, granitemoehybrid, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hgnet_v2, hiera, hubert, hunyuan_v1_dense, hunyuan_v1_moe, ibert, 
idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, internvl, internvl_vision, jamba, janus, jetmoe, jukebox, kosmos-2, kosmos-2.5, kyutai_speech_to_text, layoutlm, layoutlmv2, layoutlmv3, led, levit, lfm2, lfm2_moe, lfm2_vl, lightglue, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longcat_flash, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, metaclip_2, mgp-str, mimi, minimax, ministral, mistral, mistral3, mixtral, mlcd, mllama, mm-grounding-dino, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, modernbert-decoder, moonshine, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmo3, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, ovis2, owlv2, owlvit, paligemma, parakeet_ctc, parakeet_encoder, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, perception_lm, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_omni, qwen2_5_vl, qwen2_5_vl_text, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen2_vl_text, qwen3, qwen3_moe, qwen3_next, qwen3_omni_moe, qwen3_vl, qwen3_vl_moe, qwen3_vl_moe_text, qwen3_vl_text, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam2, sam2_hiera_det_model, sam2_video, sam2_vision_model, sam_hq, sam_hq_vision_model, sam_vision_model, seamless_m4t, seamless_m4t_v2, seed_oss, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip2_vision_model, siglip_vision_model, smollm3, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, 
speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, t5gemma, table-transformer, tapas, textnet, time_series_transformer, timesfm, timesformer, timm_backbone, timm_wrapper, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, vaultgemma, video_llama_3, video_llama_3_vision, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vitpose, vitpose_backbone, vits, vivit, vjepa2, voxtral, voxtral_encoder, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xcodec, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xlstm, xmod, yolos, yoso, zamba, zamba2, zoedepth
```
In short:
1. `AutoProcessor` loads a tokenizer because the model is single-modality.
2. A PEFT adapter is added.
3. The model and tokenizer are saved:
3a. because of the PEFT adapter, only `adapter_config.json` and `adapter_model.safetensors` are saved, and
3b. because the tokenizer is single-modality, only `tokenizer_config.json` etc. are saved.
4. When re-loading, `AutoProcessor` can find neither a `processor_config.json` nor a `config.json`, since neither was saved.
This is bottlenecking my ability to use `AutoProcessor` as a catch-all processor initialization. Note that if the model is multimodal, then this issue does not happen, as then a `processor_config.json` is saved.
Perhaps a solution is to:
1. Wrap [this `config` initialization](https://github.com/huggingface/transformers/blob/bb6028cb7938e33ec9e75a0b47b27a6b75584151/src/transformers/models/auto/processing_auto.py#L345-L349) with a try-except
2. Use checks that `config` is not None [here](https://github.com/huggingface/transformers/blob/bb6028cb7938e33ec9e75a0b47b27a6b75584151/src/transformers/models/auto/processing_auto.py#L351-L354) and [here](https://github.com/huggingface/transformers/blob/bb6028cb7938e33ec9e75a0b47b27a6b75584151/src/transformers/models/auto/processing_auto.py#L360-L385)
3. so that we can reach the fallback section [here](https://github.com/huggingface/transformers/blob/bb6028cb7938e33ec9e75a0b47b27a6b75584151/src/transformers/models/auto/processing_auto.py#L387-L406)
But perhaps there's room for a more structural/permanent fix rather than more try-excepts to get to a fallback.
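To make the proposed flow concrete, here is a minimal sketch of the try-except-with-fallback pattern. All names (`load_config`, `load_tokenizer`, `from_pretrained`) are hypothetical stand-ins for the transformers internals linked above, not the actual API:

```python
# Sketch of the proposed fallback flow. load_config / load_tokenizer are
# hypothetical stand-ins for AutoConfig.from_pretrained and the tokenizer
# fallback in processing_auto.py, not real transformers functions.
def load_config(path):
    # Stand-in for AutoConfig.from_pretrained: raises when no config.json exists,
    # which is the situation in a PEFT-only save directory.
    raise ValueError(f"Unrecognized model in {path}")

def load_tokenizer(path):
    # Stand-in for the tokenizer fallback that only needs tokenizer_config.json.
    return f"tokenizer loaded from {path}"

def from_pretrained(path):
    # Step 1: wrap the config initialization in a try-except.
    try:
        config = load_config(path)
    except ValueError:
        config = None  # no config.json: likely a PEFT adapter save dir

    # Step 2: guard the config-based resolution on `config is not None`.
    if config is not None:
        ...  # normal processor-class resolution would happen here

    # Step 3: reach the fallback section that resolves a tokenizer directly.
    return load_tokenizer(path)

print(from_pretrained("peft_processor_test_path"))
```

The point of the sketch is only the control flow: swallowing the config error lets execution reach the existing single-modality fallback instead of raising.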
cc @BenjaminBossan for PEFT
- Tom Aarsen
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41846/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41845
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41845/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41845/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41845/events
|
https://github.com/huggingface/transformers/pull/41845
| 3,549,217,758
|
PR_kwDOCUB6oc6vgOc8
| 41,845
|
Fix conditional detr max size
|
{
"login": "nimeshakalanka",
"id": 90953455,
"node_id": "MDQ6VXNlcjkwOTUzNDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/90953455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nimeshakalanka",
"html_url": "https://github.com/nimeshakalanka",
"followers_url": "https://api.github.com/users/nimeshakalanka/followers",
"following_url": "https://api.github.com/users/nimeshakalanka/following{/other_user}",
"gists_url": "https://api.github.com/users/nimeshakalanka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nimeshakalanka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nimeshakalanka/subscriptions",
"organizations_url": "https://api.github.com/users/nimeshakalanka/orgs",
"repos_url": "https://api.github.com/users/nimeshakalanka/repos",
"events_url": "https://api.github.com/users/nimeshakalanka/events{/privacy}",
"received_events_url": "https://api.github.com/users/nimeshakalanka/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-24T12:22:53
| 2025-10-28T12:12:35
| 2025-10-28T12:12:34
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41845",
"html_url": "https://github.com/huggingface/transformers/pull/41845",
"diff_url": "https://github.com/huggingface/transformers/pull/41845.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41845.patch",
"merged_at": null
}
|
## Description
This PR removes the deprecated `max_size` parameter from the `ConditionalDetrImageProcessor.preprocess()` method.
## Fixes
Closes #37939
## Changes
- Removed `max_size` parameter from the `preprocess` method signature
- Removed all references to `max_size` in the method body
## Testing
- Ran existing tests for conditional_detr model
- Verified no deprecation warnings are raised
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41845/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41844
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41844/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41844/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41844/events
|
https://github.com/huggingface/transformers/pull/41844
| 3,548,687,951
|
PR_kwDOCUB6oc6veehj
| 41,844
|
Fix FSDPv2 checkpoint saving on TPU by using recursive unwrap
|
{
"login": "Nikhil172913832",
"id": 140622713,
"node_id": "U_kgDOCGG7eQ",
"avatar_url": "https://avatars.githubusercontent.com/u/140622713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nikhil172913832",
"html_url": "https://github.com/Nikhil172913832",
"followers_url": "https://api.github.com/users/Nikhil172913832/followers",
"following_url": "https://api.github.com/users/Nikhil172913832/following{/other_user}",
"gists_url": "https://api.github.com/users/Nikhil172913832/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nikhil172913832/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nikhil172913832/subscriptions",
"organizations_url": "https://api.github.com/users/Nikhil172913832/orgs",
"repos_url": "https://api.github.com/users/Nikhil172913832/repos",
"events_url": "https://api.github.com/users/Nikhil172913832/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nikhil172913832/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-24T10:15:45
| 2025-10-24T10:15:45
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41844",
"html_url": "https://github.com/huggingface/transformers/pull/41844",
"diff_url": "https://github.com/huggingface/transformers/pull/41844.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41844.patch",
"merged_at": null
}
|
# What does this PR do?
This PR fixes checkpoint saving for FSDPv2 (SPMD) on TPU by properly unwrapping nested FSDP wrappers before extracting the model state dict.
When using FSDPv2 on TPU, models have nested FSDP wrappers around each transformer layer. The previous implementation only unwrapped the top-level wrapper, causing the saved checkpoint to contain wrapped state dict keys instead of the actual model parameters. This resulted in:
- PEFT adapters not being saved in the correct format
- Model weights appearing unchanged after training
- Missing adapter keys when loading checkpoints
The fix uses `unwrap_model` with `recursive=True` specifically for FSDPv2 to unwrap all nested wrappers, then extracts the state dict from the fully unwrapped model. This ensures clean parameter keys in saved checkpoints while maintaining backward compatibility with FSDPv1 and other training configurations.
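The recursive-versus-top-level distinction can be illustrated with a toy sketch. This is not the Trainer code: it assumes, as torch FSDP does, that each wrapper exposes its inner model via a `.module` attribute, and it uses linear nesting for simplicity (real FSDPv2 nests wrappers inside submodules):

```python
# Toy illustration of top-level vs. recursive unwrapping. Wrapper stands in
# for an FSDP wrapper exposing its wrapped model as `.module`.
class Wrapper:
    def __init__(self, module):
        self.module = module

class Model:
    pass

def unwrap_model(model, recursive=False):
    # Peel off wrappers: one layer by default, all layers when recursive=True.
    while hasattr(model, "module"):
        model = model.module
        if not recursive:
            break
    return model

wrapped = Wrapper(Wrapper(Model()))
top_only = unwrap_model(wrapped)                  # still a Wrapper underneath
fully = unwrap_model(wrapped, recursive=True)     # the bare Model
print(type(top_only).__name__, type(fully).__name__)
```

With only top-level unwrapping, the state dict is taken from a still-wrapped model and its keys carry wrapper prefixes; recursive unwrapping yields clean parameter keys.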
https://github.com/huggingface/transformers/issues/36004
Fixes #36004
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@SunMarc @muellerzr
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41844/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41843
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41843/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41843/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41843/events
|
https://github.com/huggingface/transformers/pull/41843
| 3,548,254,065
|
PR_kwDOCUB6oc6vdGKO
| 41,843
|
Fix Qwen2Audio flash attention mask format for generation
|
{
"login": "Abdennacer-Badaoui",
"id": 106801897,
"node_id": "U_kgDOBl2q6Q",
"avatar_url": "https://avatars.githubusercontent.com/u/106801897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abdennacer-Badaoui",
"html_url": "https://github.com/Abdennacer-Badaoui",
"followers_url": "https://api.github.com/users/Abdennacer-Badaoui/followers",
"following_url": "https://api.github.com/users/Abdennacer-Badaoui/following{/other_user}",
"gists_url": "https://api.github.com/users/Abdennacer-Badaoui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abdennacer-Badaoui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abdennacer-Badaoui/subscriptions",
"organizations_url": "https://api.github.com/users/Abdennacer-Badaoui/orgs",
"repos_url": "https://api.github.com/users/Abdennacer-Badaoui/repos",
"events_url": "https://api.github.com/users/Abdennacer-Badaoui/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abdennacer-Badaoui/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-24T08:31:07
| 2025-10-24T12:46:00
| 2025-10-24T12:45:48
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41843",
"html_url": "https://github.com/huggingface/transformers/pull/41843",
"diff_url": "https://github.com/huggingface/transformers/pull/41843.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41843.patch",
"merged_at": "2025-10-24T12:45:48"
}
|
## What does this PR fix?
This PR fixes the `test_eager_matches_fa2_generate` test failure for Qwen2Audio by using the `create_bidirectional_mask` utility function to properly handle attention masks across different attention implementations.
The Qwen2Audio model was manually creating a 4D attention mask with `-inf` values for the audio encoder, regardless of the attention implementation being used. This caused issues with Flash Attention 2/3, which requires a 2D boolean mask (shape `(batch_size, seq_len)`) with `1` for valid tokens and `0` for padding.
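The two mask formats can be sketched in plain Python (illustrative only, not the model code; real masks are tensors built from the feature-extractor padding):

```python
# Sketch of the two attention-mask formats, built from per-example lengths.
def bool_mask_2d(lengths, max_len):
    # Flash Attention style: shape (batch_size, seq_len),
    # 1 for valid tokens, 0 for padding.
    return [[1 if t < n else 0 for t in range(max_len)] for n in lengths]

def additive_mask_4d(lengths, max_len):
    # Eager style: broadcastable (batch, 1, 1, seq_len) additive mask
    # with -inf on padded positions, 0.0 on valid ones.
    neg_inf = float("-inf")
    return [[[[0.0 if t < n else neg_inf for t in range(max_len)]]] for n in lengths]

print(bool_mask_2d([3, 2], 4))  # → [[1, 1, 1, 0], [1, 1, 0, 0]]
```

Handing the 4D additive form to Flash Attention 2/3, which expects the 2D boolean form, is exactly the mismatch the utility function resolves by building the right format per attention implementation.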
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41843/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41842
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41842/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41842/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41842/events
|
https://github.com/huggingface/transformers/issues/41842
| 3,548,058,215
|
I_kwDOCUB6oc7TexJn
| 41,842
|
Incorrect usage of `num_items_in_batch`?
|
{
"login": "gohar94",
"id": 6470801,
"node_id": "MDQ6VXNlcjY0NzA4MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6470801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gohar94",
"html_url": "https://github.com/gohar94",
"followers_url": "https://api.github.com/users/gohar94/followers",
"following_url": "https://api.github.com/users/gohar94/following{/other_user}",
"gists_url": "https://api.github.com/users/gohar94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gohar94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gohar94/subscriptions",
"organizations_url": "https://api.github.com/users/gohar94/orgs",
"repos_url": "https://api.github.com/users/gohar94/repos",
"events_url": "https://api.github.com/users/gohar94/events{/privacy}",
"received_events_url": "https://api.github.com/users/gohar94/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-24T07:36:00
| 2025-10-24T11:04:21
| null |
NONE
| null | null | null | null |
It seems that `num_items_in_batch` is computed for all items in the batch [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2430).
However, when the loss is computed in `training_step`, it is computed for each input in the batch one at a time. Does it make sense to pass the whole batch's `num_items_in_batch`, or should that number be the count for that particular input only?
Right now, the entire batch's `num_items_in_batch` is used [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2486).
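For what it's worth, a toy calculation (no transformers code, just made-up token counts and loss sums) shows why the two normalizations differ:

```python
# Toy illustration: per-example summed losses, normalized two ways.
per_input_tokens = [3, 5, 2]             # label tokens in each example of one batch
per_input_loss_sums = [6.0, 20.0, 4.0]   # unreduced (summed) loss per example

num_items_in_batch = sum(per_input_tokens)  # the batch-wide count (10 here)

# Dividing each example's summed loss by the batch-wide count and adding up
# gives the mean over all tokens in the batch.
batch_normalized = sum(s / num_items_in_batch for s in per_input_loss_sums)

# Dividing each example by its own token count instead gives a mean of
# per-example means, which weights short examples more heavily.
per_example_means = (
    sum(s / n for s, n in zip(per_input_loss_sums, per_input_tokens))
    / len(per_input_tokens)
)

print(batch_normalized, per_example_means)  # 3.0 vs. ~2.667
```

So passing the whole batch's count to each per-input loss call is consistent, provided the per-input losses are summed rather than averaged before division.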
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41842/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41841
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41841/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41841/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41841/events
|
https://github.com/huggingface/transformers/pull/41841
| 3,547,892,412
|
PR_kwDOCUB6oc6vb2VQ
| 41,841
|
Zeh vibevoice doc
|
{
"login": "zehua-w",
"id": 113515484,
"node_id": "U_kgDOBsQb3A",
"avatar_url": "https://avatars.githubusercontent.com/u/113515484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zehua-w",
"html_url": "https://github.com/zehua-w",
"followers_url": "https://api.github.com/users/zehua-w/followers",
"following_url": "https://api.github.com/users/zehua-w/following{/other_user}",
"gists_url": "https://api.github.com/users/zehua-w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zehua-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zehua-w/subscriptions",
"organizations_url": "https://api.github.com/users/zehua-w/orgs",
"repos_url": "https://api.github.com/users/zehua-w/repos",
"events_url": "https://api.github.com/users/zehua-w/events{/privacy}",
"received_events_url": "https://api.github.com/users/zehua-w/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-24T06:42:02
| 2025-10-24T06:43:02
| 2025-10-24T06:42:15
|
NONE
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41841",
"html_url": "https://github.com/huggingface/transformers/pull/41841",
"diff_url": "https://github.com/huggingface/transformers/pull/41841.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41841.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "zehua-w",
"id": 113515484,
"node_id": "U_kgDOBsQb3A",
"avatar_url": "https://avatars.githubusercontent.com/u/113515484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zehua-w",
"html_url": "https://github.com/zehua-w",
"followers_url": "https://api.github.com/users/zehua-w/followers",
"following_url": "https://api.github.com/users/zehua-w/following{/other_user}",
"gists_url": "https://api.github.com/users/zehua-w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zehua-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zehua-w/subscriptions",
"organizations_url": "https://api.github.com/users/zehua-w/orgs",
"repos_url": "https://api.github.com/users/zehua-w/repos",
"events_url": "https://api.github.com/users/zehua-w/events{/privacy}",
"received_events_url": "https://api.github.com/users/zehua-w/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41841/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41840
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41840/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41840/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41840/events
|
https://github.com/huggingface/transformers/pull/41840
| 3,547,863,334
|
PR_kwDOCUB6oc6vbwGU
| 41,840
|
Fix encoding and improve tokenizer testing logic
|
{
"login": "brittytino",
"id": 153193545,
"node_id": "U_kgDOCSGMSQ",
"avatar_url": "https://avatars.githubusercontent.com/u/153193545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brittytino",
"html_url": "https://github.com/brittytino",
"followers_url": "https://api.github.com/users/brittytino/followers",
"following_url": "https://api.github.com/users/brittytino/following{/other_user}",
"gists_url": "https://api.github.com/users/brittytino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brittytino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brittytino/subscriptions",
"organizations_url": "https://api.github.com/users/brittytino/orgs",
"repos_url": "https://api.github.com/users/brittytino/repos",
"events_url": "https://api.github.com/users/brittytino/events{/privacy}",
"received_events_url": "https://api.github.com/users/brittytino/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-24T06:30:23
| 2025-10-24T06:30:51
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41840",
"html_url": "https://github.com/huggingface/transformers/pull/41840",
"diff_url": "https://github.com/huggingface/transformers/pull/41840.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41840.patch",
"merged_at": null
}
|
**Summary**
This pull request resolves multiple issues in the tokenizer parity testing script used to validate consistency between the slow (Python) and fast (Rust) tokenizers in the Hugging Face transformers library.
The changes correct dataset handling, fix incorrect API usage, and improve overall code clarity, performance, and maintainability.
**Key Changes**
1. Fixed Incorrect Dataset Field Access
The original implementation attempted to access nested fields in the `facebook/xnli` dataset using:
```python
for text in dataset[i]["premise"].values():
for text in dataset[i]["hypothesis"]["translation"]:
```
However, the dataset structure is flat, with top-level `premise` and `hypothesis` fields containing plain strings, so these accesses raised exceptions (`AttributeError`/`TypeError`) during iteration.
**Updated Implementation:**
```python
for example in dataset:
test_string(slow, fast, example["premise"])
test_string(slow, fast, example["hypothesis"])
```
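The flat structure can be illustrated with stand-in data (a plain list of dicts rather than the real `datasets` object):

```python
# Stand-in for the flat xnli-style structure: each example exposes
# top-level string fields, with no nested "translation" layer.
dataset = [
    {"premise": "A man inspects a uniform.", "hypothesis": "The man is sleeping."},
    {"premise": "Two kids are playing.", "hypothesis": "Kids are outside."},
]

def collect_texts(dataset):
    """Iterate directly over examples and read the flat string fields."""
    texts = []
    for example in dataset:
        texts.append(example["premise"])      # value is already a plain string
        texts.append(example["hypothesis"])   # no example["hypothesis"]["translation"]
    return texts

print(collect_texts(dataset))
```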
**2. Corrected Offset Mapping Handling in `check_LTR_mark`**
The previous code accessed `enc.offsets` after calling `encode_plus()`, which returns a dict-like `BatchEncoding` rather than an `Encoding` object with an `offsets` attribute.
This raised `AttributeError` exceptions.
**Fixed Implementation:**
```python
enc = fast.encode_plus(line, return_offsets_mapping=True)
offsets = enc["offset_mapping"]
```
Added proper boundary checks to handle cases where index positions are out of range.
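The dict-style access and the boundary check can be sketched against a stand-in for the returned mapping (toy data, not a real tokenizer output):

```python
# Stand-in for the dict returned by encode_plus(..., return_offsets_mapping=True);
# the real BatchEncoding is dict-like, so enc["offset_mapping"] works the same way.
enc = {"offset_mapping": [(0, 0), (0, 5), (6, 11), (0, 0)]}

def char_span(enc, index):
    """Return the (start, end) character span at `index`, or None if out of range."""
    offsets = enc["offset_mapping"]
    if 0 <= index < len(offsets):
        return offsets[index]
    return None  # boundary check instead of an IndexError

assert char_span(enc, 1) == (0, 5)
assert char_span(enc, 99) is None
```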
**3. Added Missing Global Variable Declarations**
The counters (perfect, imperfect, wrong, total) were modified inside several functions without being declared as global.
This could lead to incorrect scope behavior or counter mismatches.
**Fix:**
Declared the counters as `global` inside the functions that assign them (a `global` statement at module level has no effect):
```python
def test_string(slow, fast, text):
    global imperfect, perfect, wrong, total
    ...
```
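As a runnable sketch of the counter pattern (toy `record` helper, not the actual script's functions):

```python
perfect = imperfect = wrong = total = 0

def record(result):
    # `global` must appear in each function that *assigns* the counters;
    # without it, Python would create new local variables instead.
    global perfect, imperfect, wrong, total
    total += 1
    if result == "perfect":
        perfect += 1
    elif result == "imperfect":
        imperfect += 1
    else:
        wrong += 1

for r in ["perfect", "perfect", "wrong"]:
    record(r)
print(perfect, imperfect, wrong, total)  # 2 0 1 3
```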
**4. Refactored Dataset Iteration**
Replaced index-based iteration with direct dataset iteration for cleaner and more memory-efficient looping:
```python
for example in dataset:
...
```
This improves readability and eliminates unnecessary indexing operations.
**5. General Cleanup and Minor Improvements**
- Improved logging consistency and readability.
- Removed redundant temporary variables.
- Ensured all tokenizer checks run safely without unhandled exceptions.
- Maintained full backward compatibility with existing functionality.
**Results**
- The script now executes without errors on the facebook/xnli dataset.
- Tokenizer output comparisons between slow and fast implementations complete successfully.
- Accuracy reporting and detailed mismatch diagnostics function as intended.
- The script produces consistent results across multiple tokenizer architectures.
**Testing**
- Validated using BertTokenizer, XLMRobertaTokenizer, and MBartTokenizer.
- Confirmed parity check logic and reporting are correct.
- No exceptions encountered during execution.
- Accuracy metrics print as expected.
**Files Modified**
tokenizer_equivalence_test.py
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41840/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41839
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41839/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41839/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41839/events
|
https://github.com/huggingface/transformers/pull/41839
| 3,547,805,201
|
PR_kwDOCUB6oc6vbjZj
| 41,839
|
unpin torch/torchcodec for CircleCI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-24T06:08:20
| 2025-10-24T06:19:40
| 2025-10-24T06:19:38
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41839",
"html_url": "https://github.com/huggingface/transformers/pull/41839",
"diff_url": "https://github.com/huggingface/transformers/pull/41839.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41839.patch",
"merged_at": "2025-10-24T06:19:38"
}
|
# What does this PR do?
We can now run CircleCI with torch 2.9 + torchcodec 0.8, after a minor change in `src/transformers/video_utils.py`, as discussed on Slack:
https://huggingface.slack.com/archives/C3PDTEV8E/p1761255740979019?thread_ts=1760709571.854159&cid=C3PDTEV8E
> please use the device=kwargs.get("device", "cpu"), workaround for now
We didn't officially support device=None (the Optional type annotation was a mistake). So your workaround is the correct fix.
We may eventually support device=None (https://github.com/meta-pytorch/torchcodec/issues/993) but it won't always default to CPU like it used to
and zucchini-nlp is OK with that:
https://huggingface.slack.com/archives/D06HT56C0HF/p1761123374935529
> let's just make cpu default device untill fixed then
[10:56](https://huggingface.slack.com/archives/D06HT56C0HF/p1761123382729509)
with a TODO comment
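The workaround boils down to defaulting the keyword to `"cpu"` when the caller omits it; a minimal sketch of the pattern (hypothetical `open_video` helper, not the actual `video_utils` code):

```python
def open_video(path, **kwargs):
    # TODO: torchcodec may later support device=None with a different default;
    # until then, fall back to CPU explicitly instead of passing None through.
    device = kwargs.get("device", "cpu")
    return f"decoding {path} on {device}"

print(open_video("clip.mp4"))                 # decoding clip.mp4 on cpu
print(open_video("clip.mp4", device="cuda"))  # decoding clip.mp4 on cuda
```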
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41839/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41838
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41838/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41838/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41838/events
|
https://github.com/huggingface/transformers/issues/41838
| 3,547,793,022
|
I_kwDOCUB6oc7TdwZ-
| 41,838
|
ONNX export vmap functorch
|
{
"login": "Worke1221",
"id": 184335759,
"node_id": "U_kgDOCvy9jw",
"avatar_url": "https://avatars.githubusercontent.com/u/184335759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Worke1221",
"html_url": "https://github.com/Worke1221",
"followers_url": "https://api.github.com/users/Worke1221/followers",
"following_url": "https://api.github.com/users/Worke1221/following{/other_user}",
"gists_url": "https://api.github.com/users/Worke1221/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Worke1221/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Worke1221/subscriptions",
"organizations_url": "https://api.github.com/users/Worke1221/orgs",
"repos_url": "https://api.github.com/users/Worke1221/repos",
"events_url": "https://api.github.com/users/Worke1221/events{/privacy}",
"received_events_url": "https://api.github.com/users/Worke1221/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-24T06:03:34
| 2025-10-24T08:29:08
| null |
NONE
| null | null | null | null |
### Feature request
## Environment info
- `transformers` version: 4.44.2 (or your version)
- Platform: Linux (Ubuntu 22.04)
- Python version: 3.10.12
- PyTorch version (GPU?): 2.7.0+cu126
- Using GPU in script?: Yes (but export fails on CPU too)
- ONNX Runtime version: 1.18.0 (if applicable)
## Information
Model I am using: **Qwen/Qwen2-0.5B**
The problem arises when using:
- [x] the official example scripts: (give details below)
- [x] my own modified scripts: (see below)
The tasks I am working on:
- [x] ONNX export of causal language model
## To reproduce
Steps to reproduce the behavior:
1. Run this minimal export script:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "Qwen/Qwen2-0.5B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Dummy input
input_ids = torch.randint(0, 1000, (1, 512))
attention_mask = torch.ones_like(input_ids)
# Temporarily disable cache for export
model.config.use_cache = False
torch.onnx.export(
model,
(input_ids, attention_mask),
"qwen2.onnx",
input_names=["input_ids", "attention_mask"],
output_names=["logits"],
dynamic_axes={
"input_ids": {0: "batch", 1: "seq"},
"attention_mask": {0: "batch", 1: "seq"},
"logits": {0: "batch", 1: "seq"}
},
opset_version=14
)
```
### Motivation
### Your contribution
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41838/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41837
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41837/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41837/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41837/events
|
https://github.com/huggingface/transformers/pull/41837
| 3,547,459,703
|
PR_kwDOCUB6oc6vaYf2
| 41,837
|
Allow saving multiple tokenizers with different filenames
|
{
"login": "aijadugar",
"id": 139578960,
"node_id": "U_kgDOCFHOUA",
"avatar_url": "https://avatars.githubusercontent.com/u/139578960?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aijadugar",
"html_url": "https://github.com/aijadugar",
"followers_url": "https://api.github.com/users/aijadugar/followers",
"following_url": "https://api.github.com/users/aijadugar/following{/other_user}",
"gists_url": "https://api.github.com/users/aijadugar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aijadugar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aijadugar/subscriptions",
"organizations_url": "https://api.github.com/users/aijadugar/orgs",
"repos_url": "https://api.github.com/users/aijadugar/repos",
"events_url": "https://api.github.com/users/aijadugar/events{/privacy}",
"received_events_url": "https://api.github.com/users/aijadugar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-24T03:23:11
| 2025-10-27T15:23:11
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41837",
"html_url": "https://github.com/huggingface/transformers/pull/41837",
"diff_url": "https://github.com/huggingface/transformers/pull/41837.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41837.patch",
"merged_at": null
}
|
# What does this PR do?
This PR fixes an issue where saving a custom Processor that includes multiple sub-tokenizers of the same type caused them to overwrite each other during serialization.
The root cause was that all sub-components were being saved using the same default filenames, leading to collisions.
This update introduces unique naming and loading logic in the ProcessorMixin save/load methods, allowing processors with multiple tokenizers to be safely saved and reloaded without data loss.
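The collision-avoidance idea can be sketched with plain JSON files (hypothetical filenames and helper; the actual `ProcessorMixin` logic differs):

```python
import json
import os
import tempfile

def save_subcomponents(save_dir, tokenizers):
    """Save each sub-tokenizer's config under a unique, attribute-derived filename."""
    for attr_name, tok_config in tokenizers.items():
        # e.g. "text_tokenizer_config.json" instead of a shared "tokenizer_config.json",
        # so two tokenizers of the same type no longer overwrite each other
        path = os.path.join(save_dir, f"{attr_name}_config.json")
        with open(path, "w") as f:
            json.dump(tok_config, f)

with tempfile.TemporaryDirectory() as d:
    save_subcomponents(d, {
        "text_tokenizer": {"vocab_size": 100},
        "code_tokenizer": {"vocab_size": 200},
    })
    files = sorted(os.listdir(d))
    print(files)  # two distinct files, no collision
```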
Fixes #41816
## Before submitting
- [x] I have read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request).
- [x] The change was discussed in [issue #41816](https://github.com/huggingface/transformers/issues/41816).
- [x] I’ve tested the processor save/load logic locally with multiple tokenizers.
- [x] No documentation changes were required.
- [x] Added/verified tests for multiple sub-tokenizers loading correctly.
## Who can review?
Tagging maintainers familiar with processor and tokenizer internals:
@CyrilVallez
@ArthurZucker
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41837/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41836
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41836/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41836/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41836/events
|
https://github.com/huggingface/transformers/pull/41836
| 3,547,401,414
|
PR_kwDOCUB6oc6vaL6I
| 41,836
|
Remove redundant code from Qwen3VLProcessor
|
{
"login": "Xqle",
"id": 87457840,
"node_id": "MDQ6VXNlcjg3NDU3ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/87457840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xqle",
"html_url": "https://github.com/Xqle",
"followers_url": "https://api.github.com/users/Xqle/followers",
"following_url": "https://api.github.com/users/Xqle/following{/other_user}",
"gists_url": "https://api.github.com/users/Xqle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xqle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xqle/subscriptions",
"organizations_url": "https://api.github.com/users/Xqle/orgs",
"repos_url": "https://api.github.com/users/Xqle/repos",
"events_url": "https://api.github.com/users/Xqle/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xqle/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-24T02:55:27
| 2025-10-24T12:11:42
| 2025-10-24T11:08:49
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41836",
"html_url": "https://github.com/huggingface/transformers/pull/41836",
"diff_url": "https://github.com/huggingface/transformers/pull/41836.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41836.patch",
"merged_at": "2025-10-24T11:08:49"
}
|
# What does this PR do?
As per title.
There are two identical lines assigning `video_grid_thw = videos_inputs["video_grid_thw"]` in `Qwen3VLProcessor.__call__()`. One of them can be safely removed for cleaner and more concise code.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@zucchini-nlp
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41836/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41835
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41835/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41835/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41835/events
|
https://github.com/huggingface/transformers/pull/41835
| 3,547,364,415
|
PR_kwDOCUB6oc6vaD0-
| 41,835
|
Remove qwen3vl redundant code
|
{
"login": "Xqle",
"id": 87457840,
"node_id": "MDQ6VXNlcjg3NDU3ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/87457840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xqle",
"html_url": "https://github.com/Xqle",
"followers_url": "https://api.github.com/users/Xqle/followers",
"following_url": "https://api.github.com/users/Xqle/following{/other_user}",
"gists_url": "https://api.github.com/users/Xqle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xqle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xqle/subscriptions",
"organizations_url": "https://api.github.com/users/Xqle/orgs",
"repos_url": "https://api.github.com/users/Xqle/repos",
"events_url": "https://api.github.com/users/Xqle/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xqle/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-24T02:37:03
| 2025-10-24T16:14:14
| 2025-10-24T02:39:51
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41835",
"html_url": "https://github.com/huggingface/transformers/pull/41835",
"diff_url": "https://github.com/huggingface/transformers/pull/41835.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41835.patch",
"merged_at": null
}
|
# What does this PR do?
As per title.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@zucchini-nlp
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "Xqle",
"id": 87457840,
"node_id": "MDQ6VXNlcjg3NDU3ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/87457840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xqle",
"html_url": "https://github.com/Xqle",
"followers_url": "https://api.github.com/users/Xqle/followers",
"following_url": "https://api.github.com/users/Xqle/following{/other_user}",
"gists_url": "https://api.github.com/users/Xqle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xqle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xqle/subscriptions",
"organizations_url": "https://api.github.com/users/Xqle/orgs",
"repos_url": "https://api.github.com/users/Xqle/repos",
"events_url": "https://api.github.com/users/Xqle/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xqle/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41835/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41834
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41834/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41834/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41834/events
|
https://github.com/huggingface/transformers/pull/41834
| 3,546,892,960
|
PR_kwDOCUB6oc6vYdc7
| 41,834
|
T5gemma2
|
{
"login": "bzhangGo",
"id": 17406686,
"node_id": "MDQ6VXNlcjE3NDA2Njg2",
"avatar_url": "https://avatars.githubusercontent.com/u/17406686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bzhangGo",
"html_url": "https://github.com/bzhangGo",
"followers_url": "https://api.github.com/users/bzhangGo/followers",
"following_url": "https://api.github.com/users/bzhangGo/following{/other_user}",
"gists_url": "https://api.github.com/users/bzhangGo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bzhangGo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bzhangGo/subscriptions",
"organizations_url": "https://api.github.com/users/bzhangGo/orgs",
"repos_url": "https://api.github.com/users/bzhangGo/repos",
"events_url": "https://api.github.com/users/bzhangGo/events{/privacy}",
"received_events_url": "https://api.github.com/users/bzhangGo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-23T22:58:28
| 2025-10-28T15:39:48
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41834",
"html_url": "https://github.com/huggingface/transformers/pull/41834",
"diff_url": "https://github.com/huggingface/transformers/pull/41834.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41834.patch",
"merged_at": null
}
|
# What does this PR do?
Add support for T5Gemma2 with multi-modal and long-context capability.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41834/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41833
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41833/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41833/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41833/events
|
https://github.com/huggingface/transformers/pull/41833
| 3,546,886,157
|
PR_kwDOCUB6oc6vYcCT
| 41,833
|
extend fp_quant cases to xpu
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-23T22:55:12
| 2025-10-29T15:27:54
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41833",
"html_url": "https://github.com/huggingface/transformers/pull/41833",
"diff_url": "https://github.com/huggingface/transformers/pull/41833.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41833.patch",
"merged_at": null
}
|
With FP-Quant PR https://github.com/IST-DASLab/FP-Quant/pull/11 merged, all pseudo-quant cases pass with the Triton kernel on XPU. For next-gen XPUs that support native mxfp4/nvfp4, we will upstream support once they are ready. @ydshieh, pls help review, thx very much.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41833/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41832
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41832/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41832/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41832/events
|
https://github.com/huggingface/transformers/pull/41832
| 3,546,744,322
|
PR_kwDOCUB6oc6vX8h4
| 41,832
|
HF Trainer: ALST/Ulysses sequence parallelism integration via HF Accelerate
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-23T21:55:53
| 2025-10-28T20:39:25
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41832",
"html_url": "https://github.com/huggingface/transformers/pull/41832",
"diff_url": "https://github.com/huggingface/transformers/pull/41832.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41832.patch",
"merged_at": null
}
|
Integrates HF Accelerate's support for ALST/Ulysses sequence parallelism (https://github.com/huggingface/accelerate/pull/3817) into HF Trainer.
TODO:
- [ ] docs - unclear where they should go; FSDP/CP is not documented, nor is any other parallelism for that matter.
- [x] tests
- [ ] need to merge https://github.com/huggingface/accelerate/pull/3817
- [ ] need to wait for a new HF Accelerate release (most likely 1.11.1) and use its version in the code's compatibility checks
- [x] need to wait for a merge of https://github.com/deepspeedai/DeepSpeed/pull/7649
- [ ] need to wait for a new Deepspeed release after above is merged
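For background (not part of this PR's code), the core Ulysses exchange that ALST builds on can be sketched as a pure-Python simulation: each rank starts with its local sequence shard for all attention heads, and after an all-to-all each rank holds the full sequence for a subset of heads, so attention runs unmodified. The helper name below is hypothetical; the real implementation uses `torch.distributed` all-to-all collectives over tensors.

```python
def ulysses_all_to_all(shards):
    """Simulate the Ulysses all-to-all.

    `shards[p][h]` is rank p's local sequence chunk for head h.
    Returns `out[p][i]`: the full sequence (all ranks' chunks
    concatenated in rank order) for the i-th head assigned to rank p.
    """
    P = len(shards)            # number of ranks (sequence-parallel degree)
    H = len(shards[0])         # total number of attention heads
    heads_per_rank = H // P    # heads each rank owns after the exchange
    out = []
    for p in range(P):  # destination rank
        out.append([
            # full sequence for head h = concat of every rank's chunk
            [tok for src in range(P) for tok in shards[src][h]]
            for h in range(p * heads_per_rank, (p + 1) * heads_per_rank)
        ])
    return out


# 2 ranks, 4 heads, 2 tokens per local shard; tokens tagged (rank, head, pos)
shards = [[[(p, h, t) for t in range(2)] for h in range(4)] for p in range(2)]
exchanged = ulysses_all_to_all(shards)
# Rank 0 now holds heads 0-1 with the full 4-token sequence for each.
```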
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41832/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41831
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41831/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41831/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41831/events
|
https://github.com/huggingface/transformers/pull/41831
| 3,546,660,816
|
PR_kwDOCUB6oc6vXqKG
| 41,831
|
extend bitnet cases to xpu, all 8 cases pass
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T21:28:42
| 2025-10-24T15:49:43
| 2025-10-24T09:05:13
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41831",
"html_url": "https://github.com/huggingface/transformers/pull/41831",
"diff_url": "https://github.com/huggingface/transformers/pull/41831.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41831.patch",
"merged_at": "2025-10-24T09:05:13"
}
|
@ydshieh, pls help review, thx very much.
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41831/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41830
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41830/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41830/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41830/events
|
https://github.com/huggingface/transformers/pull/41830
| 3,546,619,265
|
PR_kwDOCUB6oc6vXg3K
| 41,830
|
fix continuous batching issues, extend ut cases to xpu
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-23T21:17:20
| 2025-10-29T15:27:09
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41830",
"html_url": "https://github.com/huggingface/transformers/pull/41830",
"diff_url": "https://github.com/huggingface/transformers/pull/41830.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41830.patch",
"merged_at": null
}
|
@SunMarc , pls help review, thx very much.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41830/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41829
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41829/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41829/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41829/events
|
https://github.com/huggingface/transformers/pull/41829
| 3,546,599,379
|
PR_kwDOCUB6oc6vXcdz
| 41,829
|
extend 2 trainer test cases to xpu
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T21:11:34
| 2025-10-24T15:53:40
| 2025-10-24T09:11:15
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41829",
"html_url": "https://github.com/huggingface/transformers/pull/41829",
"diff_url": "https://github.com/huggingface/transformers/pull/41829.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41829.patch",
"merged_at": "2025-10-24T09:11:15"
}
|
@ydshieh, pls help review, thx very much.
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41829/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41828
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41828/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41828/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41828/events
|
https://github.com/huggingface/transformers/pull/41828
| 3,545,987,066
|
PR_kwDOCUB6oc6vVU2C
| 41,828
|
[`IGNORE`] Testing something
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T18:17:49
| 2025-10-24T09:28:15
| 2025-10-24T09:28:15
|
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41828",
"html_url": "https://github.com/huggingface/transformers/pull/41828",
"diff_url": "https://github.com/huggingface/transformers/pull/41828.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41828.patch",
"merged_at": null
}
|
Don't merge.
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41828/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41827
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41827/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41827/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41827/events
|
https://github.com/huggingface/transformers/pull/41827
| 3,545,762,116
|
PR_kwDOCUB6oc6vUjkQ
| 41,827
|
[`Flash Attention`] Disable packed sequences with pos ids only during torch compile
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-23T17:20:36
| 2025-10-23T17:28:58
| null |
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41827",
"html_url": "https://github.com/huggingface/transformers/pull/41827",
"diff_url": "https://github.com/huggingface/transformers/pull/41827.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41827.patch",
"merged_at": null
}
|
Draft, only as a reference for what could be done. It would allow full-graph compile when using no attention mask.
Supported compile:
- Bsz 1
  - No mask
    - Before: No full graph, recompilations
    - After: Full graph
  - Attn mask
    - Before: No full graph, recompilations
    - After: No full graph, recompilations
  - Pos ids, no mask
    - Before: No full graph, recompilations
    - After: Not supported, silent wrong computations (if packed)
  - Fa kwargs, no mask
    - Before: Full graph
    - After: Full graph
- Bsz > 1
  - No mask
    - Before: Full graph
    - After: Full graph
  - Attn mask
    - Before: Same as bsz 1
    - After: Same as bsz 1
Tl;dr: core changes are
- No attn mask: Full graph support vs recompilations and no full graph (bsz == 1)
- Position ids but no attn mask: Not supported for compile vs recompilations and no full graph
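For illustration, packing shows up in `position_ids` as a restart from zero in the middle of a batch row. A minimal heuristic for spotting that (plain Python for clarity; this is not the actual transformers implementation) could look like:

```python
def looks_packed(position_ids):
    """Heuristic: a row of position ids that restarts from 0 mid-row
    indicates multiple sequences packed into a single batch row."""
    return any(
        later == 0          # a 0 after the first position means a new sequence began
        for row in position_ids
        for later in row[1:]
    )


# Two sequences of lengths 3 and 2 packed into one row -> packed
print(looks_packed([[0, 1, 2, 0, 1]]))   # True
# One ordinary sequence -> not packed
print(looks_packed([[0, 1, 2, 3, 4]]))   # False
```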
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41827/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41826
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41826/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41826/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41826/events
|
https://github.com/huggingface/transformers/issues/41826
| 3,545,702,940
|
I_kwDOCUB6oc7TVyIc
| 41,826
|
Integrating TiledMLP for a much smaller memory footprint
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-23T17:07:21
| 2025-10-24T14:56:33
| null |
CONTRIBUTOR
| null | null | null | null |
Similar to `TiledFusedLogitsLoss` (https://github.com/huggingface/transformers/issues/41306), it is of great benefit to apply `TiledMLP` as well: it can save a lot of HBM memory and allow a longer seqlen/larger batch size, or even enable bs=1 training that wasn't possible before.
This technology comes from Arctic Long Sequence Training (https://arxiv.org/abs/2506.13996), which allowed us to train in bf16 with 500K tokens on a single H100 GPU, 3.7M tokens on a single node, and 15M tokens on Llama-8B using just four nodes. Refer to section 3.1.1, TiledMLP, for the full details.
Tiled MLP processes `hidden_states` in small slices. If we extract a single LlamaMLP layer from Llama-8B and run a bf16 hidden_states tensor of shape `[1, 256_000, 4096]` through its forward-backward, without and with sequence-dimension tiling, we save about 10x memory, as can be seen in the memory profiler plot:
<img width="1477" height="386" alt="Image" src="https://github.com/user-attachments/assets/ef8a66c4-a268-4ceb-aade-d1e0c0be5ade" />
There is the additional cost of recomputing the forward path, so we trade time for memory.
Currently, we hack it in with monkey patching:
```python
module_path = f"transformers.models.{model_type}.modeling_{model_type}"
model_cls_prefix, _ = get_causal_lm_model_cls_prefix(model_type)
# Resolve the model's MLP class, e.g. LlamaMLP, from its modeling module
module = __import__(module_path, fromlist=[f"{model_cls_prefix}MLP"])
mlp_cls = getattr(module, f"{model_cls_prefix}MLP")
# Replace its forward with the tiled implementation
setattr(mlp_cls, "forward", tiled_mlp_forward_common)
```
You can see the full code here:
https://github.com/snowflakedb/ArcticTraining/blob/5959f72709ac40433e94d09d095c021e7466cf0d/arctic_training/model/tiled_compute.py
This could be a built-in feature, enabled with a simple flag, e.g. `from_pretrained(enable_tiled_mlp=True, ...)`.
Implementation-wise, most LLM archs use the exact same MLP module, so the common case is very easy. Some are slightly different and would require special cases, but those could perhaps be added on demand as users request them, making the initial integration very simple (i.e. start by supporting just the models that use the common MLP).
`TiledMLP` lives here https://github.com/deepspeedai/DeepSpeed/blob/3631712bd796593bf6e2e70d4f1c352937a44cf8/deepspeed/runtime/sequence_parallel/ulysses_sp.py#L807 but you can copy it if you don't want to take a dependency on `deepspeed`. It's a small, independent, self-contained autograd function.
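The core idea can be illustrated with a toy, framework-free sketch (hypothetical helper names; the real `TiledMLP` is an autograd function that also recomputes the forward pass during backward, which this sketch omits). It relies on the MLP being position-wise, so splitting along the sequence axis cannot change the result:

```python
def tiled_apply(mlp_forward, hidden_states, num_shards):
    """Apply a position-wise `mlp_forward` to sequence-dimension slices
    and concatenate the results, so only one slice's activations need
    to be materialized at a time."""
    seq_len = len(hidden_states)
    shard = (seq_len + num_shards - 1) // num_shards  # ceil division
    out = []
    for start in range(0, seq_len, shard):
        # each call only sees a small slice of the sequence
        out.extend(mlp_forward(hidden_states[start:start + shard]))
    return out


# Toy position-wise "MLP": double every hidden-vector element
mlp = lambda xs: [[2 * v for v in x] for x in xs]
states = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
# Tiled and untiled paths agree exactly
assert tiled_apply(mlp, states, num_shards=2) == mlp(states)
```

In the real implementation the memory win comes from the backward pass: only one tile's intermediate activations live at a time, at the cost of re-running the forward per tile.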
cc: @Rocketknight1
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41826/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41826/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41825
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41825/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41825/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41825/events
|
https://github.com/huggingface/transformers/pull/41825
| 3,545,683,166
|
PR_kwDOCUB6oc6vURu9
| 41,825
|
extend 2 blip2 and falcon_h1 test cases to xpu
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T17:03:04
| 2025-10-24T15:51:33
| 2025-10-24T09:15:16
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41825",
"html_url": "https://github.com/huggingface/transformers/pull/41825",
"diff_url": "https://github.com/huggingface/transformers/pull/41825.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41825.patch",
"merged_at": "2025-10-24T09:15:16"
}
|
@ydshieh, pls help review, thx very much.
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41825/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41824
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41824/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41824/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41824/events
|
https://github.com/huggingface/transformers/pull/41824
| 3,545,661,877
|
PR_kwDOCUB6oc6vUM5b
| 41,824
|
Fix const parsing for dict inputs in chat schemas
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T16:58:31
| 2025-10-24T14:14:08
| 2025-10-24T14:14:06
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41824",
"html_url": "https://github.com/huggingface/transformers/pull/41824",
"diff_url": "https://github.com/huggingface/transformers/pull/41824.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41824.patch",
"merged_at": "2025-10-24T14:14:06"
}
|
Found one more bug in chat schemas and updated the tests to cover it! `const` nodes were not being handled correctly when the parent node had structured content.
This PR also moves `test_chat_schema_utils.py` to `test_chat_parsing_utils.py` to match the name of the actual file `utils/chat_parsing_utils.py`.
Tests may not be running in the CI yet but I ran them all locally and they're passing!
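To illustrate the failure mode (with a hypothetical schema and helper, not the actual transformers parsing code), a `const` node can sit under a parent that also carries structured content, and the parser must still pick it up:

```python
# Illustrative sketch only: a `const` nested under a parent with structured
# content (an object with `properties`) must still be collected.
schema = {
    "type": "object",
    "properties": {
        "name": {"const": "get_weather"},  # const alongside structured siblings
        "arguments": {
            "type": "object",
            "properties": {"unit": {"const": "celsius"}},  # const nested deeper
        },
    },
}

def collect_consts(node):
    """Recursively collect every `const` value in a JSON-schema-like dict."""
    found = []
    if isinstance(node, dict):
        if "const" in node:
            found.append(node["const"])
        for value in node.values():
            found.extend(collect_consts(value))
    elif isinstance(node, list):
        for item in node:
            found.extend(collect_consts(item))
    return found

print(collect_consts(schema))  # ['get_weather', 'celsius']
```

A parser that only checks `const` on leaf nodes would miss `get_weather` here, since its parent object also has structured content.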
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41824/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41823
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41823/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41823/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41823/events
|
https://github.com/huggingface/transformers/pull/41823
| 3,545,614,463
|
PR_kwDOCUB6oc6vUCe7
| 41,823
|
Lfm2-VL vllm
|
{
"login": "paulpak58",
"id": 52512091,
"node_id": "MDQ6VXNlcjUyNTEyMDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/52512091?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paulpak58",
"html_url": "https://github.com/paulpak58",
"followers_url": "https://api.github.com/users/paulpak58/followers",
"following_url": "https://api.github.com/users/paulpak58/following{/other_user}",
"gists_url": "https://api.github.com/users/paulpak58/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paulpak58/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paulpak58/subscriptions",
"organizations_url": "https://api.github.com/users/paulpak58/orgs",
"repos_url": "https://api.github.com/users/paulpak58/repos",
"events_url": "https://api.github.com/users/paulpak58/events{/privacy}",
"received_events_url": "https://api.github.com/users/paulpak58/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-23T16:46:32
| 2025-10-23T16:47:35
| null |
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41823",
"html_url": "https://github.com/huggingface/transformers/pull/41823",
"diff_url": "https://github.com/huggingface/transformers/pull/41823.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41823.patch",
"merged_at": null
}
|
Fixes to integrate lfm2-vl into vLLM.
Currently pinned to 4.57.1 due to CUDA multiprocessing issues between transformers-dev and vllm-dev.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41823/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41822
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41822/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41822/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41822/events
|
https://github.com/huggingface/transformers/issues/41822
| 3,545,475,100
|
I_kwDOCUB6oc7TU6gc
| 41,822
|
Add links for issues labelled “Good First Issue” in the CONTRIBUTING guide
|
{
"login": "Dippp10",
"id": 163040639,
"node_id": "U_kgDOCbfNfw",
"avatar_url": "https://avatars.githubusercontent.com/u/163040639?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dippp10",
"html_url": "https://github.com/Dippp10",
"followers_url": "https://api.github.com/users/Dippp10/followers",
"following_url": "https://api.github.com/users/Dippp10/following{/other_user}",
"gists_url": "https://api.github.com/users/Dippp10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dippp10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dippp10/subscriptions",
"organizations_url": "https://api.github.com/users/Dippp10/orgs",
"repos_url": "https://api.github.com/users/Dippp10/repos",
"events_url": "https://api.github.com/users/Dippp10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dippp10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T16:16:51
| 2025-10-24T10:56:04
| 2025-10-24T10:56:04
|
NONE
| null | null | null | null |
@A-Mahla, the repository maintains a list of issues tagged "Good First Issue". I propose updating the CONTRIBUTING guide to include a link or section, "See current Good First Issues", pointing to that search.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41822/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41821
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41821/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41821/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41821/events
|
https://github.com/huggingface/transformers/pull/41821
| 3,545,407,671
|
PR_kwDOCUB6oc6vTZKV
| 41,821
|
Share embedding modules in BART, not only weights
|
{
"login": "githubnemo",
"id": 264196,
"node_id": "MDQ6VXNlcjI2NDE5Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/264196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/githubnemo",
"html_url": "https://github.com/githubnemo",
"followers_url": "https://api.github.com/users/githubnemo/followers",
"following_url": "https://api.github.com/users/githubnemo/following{/other_user}",
"gists_url": "https://api.github.com/users/githubnemo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/githubnemo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/githubnemo/subscriptions",
"organizations_url": "https://api.github.com/users/githubnemo/orgs",
"repos_url": "https://api.github.com/users/githubnemo/repos",
"events_url": "https://api.github.com/users/githubnemo/events{/privacy}",
"received_events_url": "https://api.github.com/users/githubnemo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T16:04:05
| 2025-10-24T15:22:02
| 2025-10-24T15:22:02
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41821",
"html_url": "https://github.com/huggingface/transformers/pull/41821",
"diff_url": "https://github.com/huggingface/transformers/pull/41821.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41821.patch",
"merged_at": "2025-10-24T15:22:02"
}
|
The embedding module is now shared between the encoder, the decoder, and `self.shared`: they are the same module object, as in the T5 implementation.
This has the benefit that it does not matter which module `get_input_embeddings` returns; the caller can be sure that modifications made to it (e.g., hooks) apply to the embeddings that are actually used.
Background: while revamping the gradient checkpointing tests in PEFT via peft#2860, we found that the gradient-enabling step (`modeling_utils.enable_input_require_grads`) does not work for BART. This causes gradient checkpointing with `use_reentrant=True` to fail, since no gradients are detected. The reason is that the value returned by `get_input_embeddings` (`self.shared`) is not the module actually called in the encoder, so any hooks added to `self.shared` are never run, in this case the hook set by `enable_input_require_grads`.
Since the root cause is a missing hook, I've added a test that directly verifies that hooks can be registered on the returned module and are actually called.
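As a minimal sketch (plain-Python stand-ins, not the actual BART modules), this is why sharing the module object matters: a hook registered on the object returned by `get_input_embeddings` only fires if the encoder calls that exact object:

```python
# Plain-Python stand-ins for illustration; `Embedding` and `Model` are not
# real transformers classes.
class Embedding:
    def __init__(self):
        self.hooks = []

    def __call__(self, x):
        for hook in self.hooks:
            hook(x)
        return x

class Model:
    def __init__(self, share_module):
        self.shared = Embedding()
        # Before the fix: the encoder holds a *separate* module (tied weights
        # only). After the fix: the encoder holds the *same* module object.
        self.encoder_embed = self.shared if share_module else Embedding()

    def get_input_embeddings(self):
        return self.shared

    def forward(self, x):
        return self.encoder_embed(x)

calls = []
broken, fixed = Model(share_module=False), Model(share_module=True)
for m in (broken, fixed):
    m.get_input_embeddings().hooks.append(lambda x: calls.append(x))
    m.forward("tok")
print(calls)  # only the shared-module model triggers the hook -> ['tok']
```

With weight tying alone, the hook on `self.shared` is silently skipped, which is exactly what broke `enable_input_require_grads`.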
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41821/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41820
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41820/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41820/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41820/events
|
https://github.com/huggingface/transformers/pull/41820
| 3,545,253,572
|
PR_kwDOCUB6oc6vS7IH
| 41,820
|
Fix processor saving with multiple tokenizers
|
{
"login": "ManishhDev",
"id": 162974418,
"node_id": "U_kgDOCbbK0g",
"avatar_url": "https://avatars.githubusercontent.com/u/162974418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ManishhDev",
"html_url": "https://github.com/ManishhDev",
"followers_url": "https://api.github.com/users/ManishhDev/followers",
"following_url": "https://api.github.com/users/ManishhDev/following{/other_user}",
"gists_url": "https://api.github.com/users/ManishhDev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ManishhDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ManishhDev/subscriptions",
"organizations_url": "https://api.github.com/users/ManishhDev/orgs",
"repos_url": "https://api.github.com/users/ManishhDev/repos",
"events_url": "https://api.github.com/users/ManishhDev/events{/privacy}",
"received_events_url": "https://api.github.com/users/ManishhDev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T15:33:22
| 2025-10-23T19:16:59
| 2025-10-23T19:16:54
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41820",
"html_url": "https://github.com/huggingface/transformers/pull/41820",
"diff_url": "https://github.com/huggingface/transformers/pull/41820.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41820.patch",
"merged_at": null
}
|
Resolves issue #41816
When a processor contains multiple tokenizers (or other sub-processors of the same type), they were overwriting each other during save because they used the same file names. This fix:
1. Saves each attribute in its own subdirectory, named after the attribute
2. Updates the loading logic to check for subdirectories first (new format), then fall back to the main directory (legacy format)
This maintains backward compatibility while allowing processors with multiple tokenizers to save and load correctly.
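A hedged sketch of the layout described above (file and attribute names are illustrative, not the exact transformers implementation):

```python
# Illustrative save/load layout: each sub-processor gets its own subdirectory
# so two tokenizers no longer clobber each other's tokenizer_config.json.
import json
import os
import tempfile

def save_processor(attributes, save_directory):
    """Save each sub-processor under a subdirectory named after its attribute."""
    for name, config in attributes.items():
        subdir = os.path.join(save_directory, name)
        os.makedirs(subdir, exist_ok=True)
        with open(os.path.join(subdir, "tokenizer_config.json"), "w") as f:
            json.dump(config, f)

def load_attribute(save_directory, name):
    """Prefer the per-attribute subdirectory; fall back to the root (legacy)."""
    for folder in (os.path.join(save_directory, name), save_directory):
        path = os.path.join(folder, "tokenizer_config.json")
        if os.path.isfile(path):
            with open(path) as f:
                return json.load(f)
    return None

with tempfile.TemporaryDirectory() as tmp:
    save_processor(
        {"tokenizer": {"vocab_size": 100}, "decoder_tokenizer": {"vocab_size": 200}},
        tmp,
    )
    print(load_attribute(tmp, "decoder_tokenizer"))  # {'vocab_size': 200}
```

The legacy fallback in `load_attribute` is what keeps previously saved single-tokenizer processors loadable.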
|
{
"login": "ManishhDev",
"id": 162974418,
"node_id": "U_kgDOCbbK0g",
"avatar_url": "https://avatars.githubusercontent.com/u/162974418?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ManishhDev",
"html_url": "https://github.com/ManishhDev",
"followers_url": "https://api.github.com/users/ManishhDev/followers",
"following_url": "https://api.github.com/users/ManishhDev/following{/other_user}",
"gists_url": "https://api.github.com/users/ManishhDev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ManishhDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ManishhDev/subscriptions",
"organizations_url": "https://api.github.com/users/ManishhDev/orgs",
"repos_url": "https://api.github.com/users/ManishhDev/repos",
"events_url": "https://api.github.com/users/ManishhDev/events{/privacy}",
"received_events_url": "https://api.github.com/users/ManishhDev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41820/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41819
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41819/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41819/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41819/events
|
https://github.com/huggingface/transformers/issues/41819
| 3,545,203,594
|
I_kwDOCUB6oc7TT4OK
| 41,819
|
IndexError: tuple index out of range when using Tensor Parallelism with FSDP2 on GPT-OSS 20B (tensor_parallel.py, line 510)
|
{
"login": "JdRion",
"id": 113088158,
"node_id": "U_kgDOBr2Wng",
"avatar_url": "https://avatars.githubusercontent.com/u/113088158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JdRion",
"html_url": "https://github.com/JdRion",
"followers_url": "https://api.github.com/users/JdRion/followers",
"following_url": "https://api.github.com/users/JdRion/following{/other_user}",
"gists_url": "https://api.github.com/users/JdRion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JdRion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JdRion/subscriptions",
"organizations_url": "https://api.github.com/users/JdRion/orgs",
"repos_url": "https://api.github.com/users/JdRion/repos",
"events_url": "https://api.github.com/users/JdRion/events{/privacy}",
"received_events_url": "https://api.github.com/users/JdRion/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-23T15:23:45
| 2025-10-30T01:17:13
| null |
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.57.1
- Platform: Linux-5.15.92-2.el8.navix.ncc.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.5.2
- Accelerate version: 1.11.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@3outeille @ArthurZucker
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I’m using the official example script from the Accelerate repository:
[examples/torch_native_parallelism/nd_parallel_trainer.py](https://github.com/huggingface/accelerate/blob/main/examples/torch_native_parallelism/nd_parallel_trainer.py).
I only made two small changes to the example:
1. Changed the model name to GPT-OSS-20B
2. Added a quantization configuration
```python
from transformers import AutoModelForCausalLM, Mxfp4Config

quantization_config = Mxfp4Config(dequantize=True)
model = AutoModelForCausalLM.from_pretrained(args.model_name, quantization_config=quantization_config, use_cache=False, **model_kwargs)
```
```yaml
distributed_type: FSDP
mixed_precision: bf16
fsdp_config:
fsdp_activation_checkpointing: false
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_cpu_ram_efficient_loading: false
fsdp_offload_params: false
fsdp_reshard_after_forward: true
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_version: 2
parallelism_config:
parallelism_config_cp_size: 1
parallelism_config_dp_replicate_size: 1
parallelism_config_dp_shard_size: 16
parallelism_config_tp_size: 8
```
### Expected behavior
When training the GPT-OSS 20B model with Tensor Parallelism (TP) and FSDP2, a runtime error occurs inside the tensor parallel integration code of `transformers`.
The error trace shows an `IndexError: tuple index out of range` raised from `transformers/integrations/tensor_parallel.py` during the forward pass (specifically in `_prepare_input_fn`).
```
[rank2]: File "/usr/local/lib/python3.12/dist-packages/transformers/trainer.py", line 2328, in train
[rank2]: return inner_training_loop(
[rank2]: ^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/transformers/trainer.py", line 2672, in _inner_training_loop
[rank2]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/trl/trainer/sft_trainer.py", line 1161, in training_step
[rank2]: return super().training_step(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/transformers/trainer.py", line 4009, in training_step
[rank2]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/trl/trainer/sft_trainer.py", line 1079, in compute_loss
[rank2]: (loss, outputs) = super().compute_loss(
[rank2]: ^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/transformers/trainer.py", line 4099, in compute_loss
[rank2]: outputs = model(**inputs)
[rank2]: ^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
[rank2]: return self._call_impl(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1879, in _call_impl
[rank2]: return inner()
[rank2]: ^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1827, in inner
[rank2]: result = forward_call(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/transformers/utils/generic.py", line 940, in wrapper
[rank2]: output = func(self, *args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/transformers/models/gpt_oss/modeling_gpt_oss.py", line 663, in forward
[rank2]: outputs: MoeModelOutputWithPast = self.model(
[rank2]: ^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
[rank2]: return self._call_impl(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1784, in _call_impl
[rank2]: return forward_call(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/transformers/utils/generic.py", line 1064, in wrapper
[rank2]: outputs = func(self, *args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/transformers/models/gpt_oss/modeling_gpt_oss.py", line 502, in forward
[rank2]: hidden_states = decoder_layer(
[rank2]: ^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/transformers/modeling_layers.py", line 94, in __call__
[rank2]: return super().__call__(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
[rank2]: return self._call_impl(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1879, in _call_impl
[rank2]: return inner()
[rank2]: ^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1827, in inner
[rank2]: result = forward_call(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
[rank2]: return func(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/transformers/models/gpt_oss/modeling_gpt_oss.py", line 366, in forward
[rank2]: hidden_states, _ = self.self_attn(
[rank2]: ^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
[rank2]: return self._call_impl(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1784, in _call_impl
[rank2]: return forward_call(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/distributed/algorithms/_checkpoint/checkpoint_wrapper.py", line 171, in forward
[rank2]: return self.checkpoint_fn( # type: ignore[misc]
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_compile.py", line 53, in inner
[rank2]: return disable_fn(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py", line 929, in _fn
[rank2]: return fn(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/utils/checkpoint.py", line 495, in checkpoint
[rank2]: ret = function(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
[rank2]: return self._call_impl(*args, **kwargs)
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1879, in _call_impl
[rank2]: return inner()
[rank2]: ^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1816, in inner
[rank2]: args_result = hook(self, args)
[rank2]: ^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/transformers/integrations/tensor_parallel.py", line 381, in <lambda>
[rank2]: module.register_forward_pre_hook(lambda mod, inputs: input_fn(mod, inputs, device_mesh))
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: File "/usr/local/lib/python3.12/dist-packages/transformers/integrations/tensor_parallel.py", line 510, in _prepare_input_fn
[rank2]: input_tensor = inputs[0]
[rank2]: ~~~~~~^^^
[rank2]: IndexError: tuple index out of range
```
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41819/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41818
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41818/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41818/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41818/events
|
https://github.com/huggingface/transformers/pull/41818
| 3,545,171,317
|
PR_kwDOCUB6oc6vSq1N
| 41,818
|
:rotating_light: Implement gradient checkpointing in GPTBigCode
|
{
"login": "githubnemo",
"id": 264196,
"node_id": "MDQ6VXNlcjI2NDE5Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/264196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/githubnemo",
"html_url": "https://github.com/githubnemo",
"followers_url": "https://api.github.com/users/githubnemo/followers",
"following_url": "https://api.github.com/users/githubnemo/following{/other_user}",
"gists_url": "https://api.github.com/users/githubnemo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/githubnemo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/githubnemo/subscriptions",
"organizations_url": "https://api.github.com/users/githubnemo/orgs",
"repos_url": "https://api.github.com/users/githubnemo/repos",
"events_url": "https://api.github.com/users/githubnemo/events{/privacy}",
"received_events_url": "https://api.github.com/users/githubnemo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-23T15:17:23
| 2025-10-29T17:32:01
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41818",
"html_url": "https://github.com/huggingface/transformers/pull/41818",
"diff_url": "https://github.com/huggingface/transformers/pull/41818.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41818.patch",
"merged_at": null
}
|
Support for gradient checkpointing was lost in the major refactoring in PR #38635; this PR re-adds it.
I extended the tests to
- test both `use_reentrant=True` and `use_reentrant=False`
- make sure `model.train` is called so that gradient checkpointing is active; this is a limitation of the tests currently used by GPTBigCode
- make sure that one (the first) gradient checkpointing layer is called
- make sure that the same non-zero grads are present for normal and checkpointing runs - this is something we tripped over before in PEFT, due to the runtime environment possibly being stored incompletely in the checkpointed forward step, see also peft#2826
Note that the invocation of `GPTBigCodeBlock.forward` has changed:
- `layer_past` is now passed as a keyword argument so that `GradientCheckpointingLayer.__call__` can see and filter this parameter (`use_reentrant=False` fails otherwise)
- `{encoder_}hidden_states` are still passed as positional arguments so that `torch.utils.checkpoint.checkpoint` receives them as positional args and computes gradients for them (kwargs would be filtered by `GradientCheckpointingLayer`).
:rotating_light: Note that this breaks backward compatibility by changing the signature of `GPTBigCodeBlock.forward`!
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41818/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41817
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41817/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41817/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41817/events
|
https://github.com/huggingface/transformers/pull/41817
| 3,545,014,762
|
PR_kwDOCUB6oc6vSIk5
| 41,817
|
add fuyu fast image processors
|
{
"login": "DeXtAr47-oss",
"id": 79273068,
"node_id": "MDQ6VXNlcjc5MjczMDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/79273068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DeXtAr47-oss",
"html_url": "https://github.com/DeXtAr47-oss",
"followers_url": "https://api.github.com/users/DeXtAr47-oss/followers",
"following_url": "https://api.github.com/users/DeXtAr47-oss/following{/other_user}",
"gists_url": "https://api.github.com/users/DeXtAr47-oss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DeXtAr47-oss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DeXtAr47-oss/subscriptions",
"organizations_url": "https://api.github.com/users/DeXtAr47-oss/orgs",
"repos_url": "https://api.github.com/users/DeXtAr47-oss/repos",
"events_url": "https://api.github.com/users/DeXtAr47-oss/events{/privacy}",
"received_events_url": "https://api.github.com/users/DeXtAr47-oss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-23T14:43:41
| 2025-10-24T16:25:08
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41817",
"html_url": "https://github.com/huggingface/transformers/pull/41817",
"diff_url": "https://github.com/huggingface/transformers/pull/41817.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41817.patch",
"merged_at": null
}
|
# What does this PR do?
This PR introduces FuyuImageProcessorFast, providing a faster alternative to the original FuyuImageProcessor by leveraging torchvision for image transformations.
Key changes include:
* Implementation of FuyuImageProcessorFast inheriting from BaseImageProcessorFast.
* Updates to tests/models/fuyu/test_image_processing_fuyu.py to include the fast processor, override the save/load tests, and fix the image height and width in test_preprocess_with_tokenizer_info: they have been updated to values divisible by 30 (180x300), ensuring compatibility with FuyuImageProcessorFast and avoiding `ValueError: image_height must be divisible by 30`. All Fuyu image processing tests now pass.
* Addition of documentation for FuyuImageProcessorFast
Fixes #36978
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case: [[Contributions Welcome] Add Fast Image Processors #36978](https://github.com/huggingface/transformers/issues/36978)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@yonigozlan
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41817/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41816
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41816/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41816/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41816/events
|
https://github.com/huggingface/transformers/issues/41816
| 3,545,001,326
|
I_kwDOCUB6oc7TTG1u
| 41,816
|
Processor saving does not work when multiple tokenizers
|
{
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-23T14:40:25
| 2025-10-28T13:23:17
| null |
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.57.1
- Platform: macOS-26.0.1-arm64-arm-64bit
- Python version: 3.12.2
- Huggingface_hub version: 0.34.3
- Safetensors version: 0.5.3
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@Cyrilvallez
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Currently, processors are saved with fixed names:
https://github.com/huggingface/transformers/blob/ca01fe4d133a498a81bdfd335024d14801e106a2/src/transformers/utils/__init__.py#L272-L278
That means that if you create a processor that uses two subprocessors of the same kind (for example, a byte-level tokenizer and a BPE tokenizer, or two image processors, etc.), they override each other, because they use the same file name.
```py
import tempfile
from transformers import ProcessorMixin, AutoTokenizer, PreTrainedTokenizer
class OtherProcessor(ProcessorMixin):
name = "other-processor"
attributes = [
"tokenizer1",
"tokenizer2",
]
tokenizer1_class = "AutoTokenizer"
tokenizer2_class = "AutoTokenizer"
def __init__(self,
tokenizer1: PreTrainedTokenizer,
tokenizer2: PreTrainedTokenizer
):
super().__init__(tokenizer1=tokenizer1,
tokenizer2=tokenizer2)
tokenizer1 = AutoTokenizer.from_pretrained("google/gemma-3-270m")
tokenizer2 = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-1.7B")
processor = OtherProcessor(tokenizer1=tokenizer1,
tokenizer2=tokenizer2)
with tempfile.TemporaryDirectory() as temp_dir:
# Save processor
processor.save_pretrained(save_directory=temp_dir, push_to_hub=False)
# Load processor
new_processor = OtherProcessor.from_pretrained(temp_dir)
assert processor.tokenizer1.__class__ != processor.tokenizer2.__class__ # passes
assert new_processor.tokenizer1.__class__ != new_processor.tokenizer2.__class__ # fails
```
### Expected behavior
You should be able to use multiple processors within a processor.
|
{
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41816/timeline
| null |
reopened
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41815
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41815/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41815/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41815/events
|
https://github.com/huggingface/transformers/pull/41815
| 3,544,791,909
|
PR_kwDOCUB6oc6vRYAs
| 41,815
|
further reducing flakiness in `utils/check_bad_commit.py` (#41658)
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T13:48:53
| 2025-10-24T09:36:04
| 2025-10-24T09:36:01
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41815",
"html_url": "https://github.com/huggingface/transformers/pull/41815",
"diff_url": "https://github.com/huggingface/transformers/pull/41815.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41815.patch",
"merged_at": "2025-10-24T09:36:01"
}
|
# What does this PR do?
In #41690, we run each test (on each git bisect commit) 4 times to avoid false positives being reported to a specific team member.
However, when checking whether a test that failed in the usual model job also passes on the commit of the current CI run (in the last job of checking bad commits), we only need 1 pass among the 4 runs to conclude it's flaky.
Prior to this PR, all 4 runs had to pass before we said "it passes at check time" and concluded the test was flaky. That is incorrect, and it meant false positives were still being reported (although with lower probability).
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41815/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41814
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41814/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41814/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41814/events
|
https://github.com/huggingface/transformers/issues/41814
| 3,544,638,730
|
I_kwDOCUB6oc7TRuUK
| 41,814
|
AutoModel does not support Qwen3VLMoE
|
{
"login": "AylinAkkus",
"id": 110031965,
"node_id": "U_kgDOBo70XQ",
"avatar_url": "https://avatars.githubusercontent.com/u/110031965?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AylinAkkus",
"html_url": "https://github.com/AylinAkkus",
"followers_url": "https://api.github.com/users/AylinAkkus/followers",
"following_url": "https://api.github.com/users/AylinAkkus/following{/other_user}",
"gists_url": "https://api.github.com/users/AylinAkkus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AylinAkkus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AylinAkkus/subscriptions",
"organizations_url": "https://api.github.com/users/AylinAkkus/orgs",
"repos_url": "https://api.github.com/users/AylinAkkus/repos",
"events_url": "https://api.github.com/users/AylinAkkus/events{/privacy}",
"received_events_url": "https://api.github.com/users/AylinAkkus/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-23T13:10:29
| 2025-10-24T11:14:43
| null |
NONE
| null | null | null | null |
### System Info
- `transformers` version: 5.0.0.dev0
- Platform: Linux-5.14.0-570.42.2.el9_6.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.18
- Huggingface_hub version: 1.0.0.rc6
- Safetensors version: 0.5.3
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: 0.18.0
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: Quadro RTX 8000
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
AutoModel does not correctly load a model of type Qwen3VLMoeModel, despite installing the latest version via:
`pip install git+https://github.com/huggingface/transformers.git`
```
from transformers import AutoModel, AutoProcessor
MODEL_PATH = "Qwen/Qwen3-VL-30B-A3B-Instruct"
model = AutoModel.from_pretrained(
MODEL_PATH
)
```
This doesn't work; however, the following does:
```
from transformers import Qwen3VLMoeForConditionalGeneration, AutoProcessor
MODEL_PATH = "Qwen/Qwen3-VL-30B-A3B-Instruct"
model = Qwen3VLMoeForConditionalGeneration.from_pretrained(
MODEL_PATH
)
```
### Expected behavior
I would expect the model to load, however I get this:
```
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 470.76it/s]
Some weights of Qwen3VLMoeModel were not initialized from the model checkpoint at Qwen/Qwen3-VL-30B-A3B-Instruct and are newly initialized: ['language_model.embed_tokens.weight', 'language_model.layers.{0...47}.input_layernorm.weight', 'language_model.layers.{0...47}.mlp.experts.down_proj', 'language_model.layers.{0...47}.mlp.experts.gate_up_proj', 'language_model.layers.{0...47}.mlp.gate.weight', 'language_model.layers.{0...47}.post_attention_layernorm.weight', 'language_model.layers.{0...47}.self_attn.k_norm.weight', 'language_model.layers.{0...47}.self_attn.k_proj.weight', 'language_model.layers.{0...47}.self_attn.o_proj.weight', 'language_model.layers.{0...47}.self_attn.q_norm.weight', 'language_model.layers.{0...47}.self_attn.q_proj.weight', 'language_model.layers.{0...47}.self_attn.v_proj.weight', 'language_model.norm.weight', 'visual.blocks.{0...26}.attn.proj.bias', 'visual.blocks.{0...26}.attn.proj.weight', 'visual.blocks.{0...26}.attn.qkv.bias', 'visual.blocks.{0...26}.attn.qkv.weight', 'visual.blocks.{0...26}.mlp.linear_f.{1, 2}.bias', 'visual.blocks.{0...26}.mlp.linear_f.{1, 2}.weight', 'visual.blocks.{0...26}.nor.{1, 2}.bias', 'visual.blocks.{0...26}.nor.{1, 2}.weight', 'visual.deepstack_merger_list.{0, 1, 2}.linear_f.{1, 2}.bias', 'visual.deepstack_merger_list.{0, 1, 2}.linear_f.{1, 2}.weight', 'visual.deepstack_merger_list.{0, 1, 2}.norm.bias', 'visual.deepstack_merger_list.{0, 1, 2}.norm.weight', 'visual.merger.linear_f.{1, 2}.bias', 'visual.merger.linear_f.{1, 2}.weight', 'visual.merger.norm.bias', 'visual.merger.norm.weight', 'visual.patch_embed.proj.bias', 'visual.patch_embed.proj.weight', 'visual.pos_embed.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41814/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41813
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41813/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41813/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41813/events
|
https://github.com/huggingface/transformers/issues/41813
| 3,544,543,682
|
I_kwDOCUB6oc7TRXHC
| 41,813
|
Huggingface transformers downloading models doesn't work!
|
{
"login": "lucian-student",
"id": 56319974,
"node_id": "MDQ6VXNlcjU2MzE5OTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/56319974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucian-student",
"html_url": "https://github.com/lucian-student",
"followers_url": "https://api.github.com/users/lucian-student/followers",
"following_url": "https://api.github.com/users/lucian-student/following{/other_user}",
"gists_url": "https://api.github.com/users/lucian-student/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucian-student/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucian-student/subscriptions",
"organizations_url": "https://api.github.com/users/lucian-student/orgs",
"repos_url": "https://api.github.com/users/lucian-student/repos",
"events_url": "https://api.github.com/users/lucian-student/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucian-student/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T12:46:18
| 2025-10-29T02:31:38
| 2025-10-24T10:50:20
|
NONE
| null | null | null | null |
### System Info
Basically it just doesn't work; with any model I use I get this:
```
2025-10-23 12:44:00.017829: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1761223440.296880 80 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1761223440.385113 80 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-10-23 12:44:18.303400: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (303)
INFO:2025-10-23 12:44:19,038:jax._src.xla_bridge:924: Unable to initialize backend 'rocm': module 'jaxlib.xla_extension' has no attribute 'GpuAllocatorConfig'
INFO:2025-10-23 12:44:19,054:jax._src.xla_bridge:924: Unable to initialize backend 'tpu': INTERNAL: Failed to open libtpu.so: libtpu.so: cannot open shared object file: No such file or directory
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.53.3
- Platform: Linux-6.6.56+-x86_64-with-glibc2.35
- Python version: 3.11.13
- Huggingface_hub version: 1.0.0.rc2
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.6.0+cu124 (NA)
- Tensorflow version (GPU?): 2.18.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.10.6 (cpu)
- Jax version: 0.5.2
- JaxLib version: 0.5.1
- Using distributed or parallel set-up in script?: <fill in>
```
```
Version '4.53.3'
```
```
---------------------------------------------------------------------------
HTTPStatusError Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_http.py in hf_raise_for_status(response, endpoint_name)
556 try:
--> 557 response.raise_for_status()
558 except httpx.HTTPStatusError as e:
/usr/local/lib/python3.11/dist-packages/httpx/_models.py in raise_for_status(self)
828 message = message.format(self, error_type=error_type)
--> 829 raise HTTPStatusError(message, request=request, response=self)
830
HTTPStatusError: Client error '404 Not Found' for url 'https://huggingface.co/api/models/Qwen/Qwen3-Embedding-0.6B/tree/main/additional_chat_templates?recursive=false&expand=false'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
The above exception was the direct cause of the following exception:
RemoteEntryNotFoundError Traceback (most recent call last)
/tmp/ipykernel_37/2822320637.py in <cell line: 0>()
----> 1 tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-0.6B', padding_side='left')
2 #model = AutoModel.from_pretrained('intfloat/multilingual-e5-large-instruct')
/usr/local/lib/python3.11/dist-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
1048 f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported."
1049 )
-> 1050 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
1051
1052 # Otherwise we have to be creative.
/usr/local/lib/python3.11/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, cache_dir, force_download, local_files_only, token, revision, trust_remote_code, *init_inputs, **kwargs)
1955 )
1956 else:
-> 1957 for template in list_repo_templates(
1958 pretrained_model_name_or_path,
1959 local_files_only=local_files_only,
/usr/local/lib/python3.11/dist-packages/transformers/utils/hub.py in list_repo_templates(repo_id, local_files_only, revision, cache_dir)
159 if not local_files_only:
160 try:
--> 161 return [
162 entry.path.removeprefix(f"{CHAT_TEMPLATE_DIR}/")
163 for entry in list_repo_tree(
/usr/local/lib/python3.11/dist-packages/transformers/utils/hub.py in <listcomp>(.0)
159 if not local_files_only:
160 try:
--> 161 return [
162 entry.path.removeprefix(f"{CHAT_TEMPLATE_DIR}/")
163 for entry in list_repo_tree(
/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_api.py in list_repo_tree(self, repo_id, path_in_repo, recursive, expand, revision, repo_type, token)
3050 encoded_path_in_repo = "/" + quote(path_in_repo, safe="") if path_in_repo else ""
3051 tree_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/tree/{revision}{encoded_path_in_repo}"
-> 3052 for path_info in paginate(path=tree_url, headers=headers, params={"recursive": recursive, "expand": expand}):
3053 yield (RepoFile(**path_info) if path_info["type"] == "file" else RepoFolder(**path_info))
3054
/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_pagination.py in paginate(path, params, headers)
35 session = get_session()
36 r = session.get(path, params=params, headers=headers)
---> 37 hf_raise_for_status(r)
38 yield from r.json()
39
/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_http.py in hf_raise_for_status(response, endpoint_name)
569 elif error_code == "EntryNotFound":
570 message = f"{response.status_code} Client Error." + "\n\n" + f"Entry Not Found for url: {response.url}."
--> 571 raise _format(RemoteEntryNotFoundError, message, response) from e
572
573 elif error_code == "GatedRepo":
RemoteEntryNotFoundError: 404 Client Error. (Request ID: Root=1-68fa2240-68b5fae228e82ffa137fd62f;5f0b9cd4-e975-4d6d-b60b-42eba4e50412)
Entry Not Found for url: https://huggingface.co/api/models/Qwen/Qwen3-Embedding-0.6B/tree/main/additional_chat_templates?recursive=false&expand=false.
additional_chat_templates does not exist on "main"
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-0.6B', padding_side='left')
### Expected behavior
Expected behaviour is that the tokenizer downloads successfully.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41813/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41812
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41812/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41812/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41812/events
|
https://github.com/huggingface/transformers/pull/41812
| 3,544,480,218
|
PR_kwDOCUB6oc6vQUWf
| 41,812
|
Fix invalid examples in QwenVL model docstrings and add Qwen3VL example
|
{
"login": "Xqle",
"id": 87457840,
"node_id": "MDQ6VXNlcjg3NDU3ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/87457840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Xqle",
"html_url": "https://github.com/Xqle",
"followers_url": "https://api.github.com/users/Xqle/followers",
"following_url": "https://api.github.com/users/Xqle/following{/other_user}",
"gists_url": "https://api.github.com/users/Xqle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Xqle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Xqle/subscriptions",
"organizations_url": "https://api.github.com/users/Xqle/orgs",
"repos_url": "https://api.github.com/users/Xqle/repos",
"events_url": "https://api.github.com/users/Xqle/events{/privacy}",
"received_events_url": "https://api.github.com/users/Xqle/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T12:28:17
| 2025-10-29T12:38:13
| 2025-10-29T12:34:13
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41812",
"html_url": "https://github.com/huggingface/transformers/pull/41812",
"diff_url": "https://github.com/huggingface/transformers/pull/41812.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41812.patch",
"merged_at": "2025-10-29T12:34:13"
}
|
# What does this PR do?
This PR fixes the non-functional examples for Qwen2-VL and Qwen2.5-VL models, and adds a runnable example for Qwen3VL.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@zucchini-nlp, @Rocketknight1
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41812/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41811
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41811/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41811/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41811/events
|
https://github.com/huggingface/transformers/pull/41811
| 3,544,028,495
|
PR_kwDOCUB6oc6vOyiM
| 41,811
|
Add a safeguard around a flaky test in gemma2
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T10:11:05
| 2025-10-23T10:36:52
| 2025-10-23T10:36:50
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41811",
"html_url": "https://github.com/huggingface/transformers/pull/41811",
"diff_url": "https://github.com/huggingface/transformers/pull/41811.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41811.patch",
"merged_at": "2025-10-23T10:36:50"
}
|
The test `tests/models/gemma2/test_modeling_gemma2.py::Gemma2IntegrationTest::test_model_2b_pipeline_bf16_flex_attention` is flaky because of the `_compile` flag at https://github.com/huggingface/transformers/blob/fe11cbb808b4301399240e40c5cb6cca9bb00d4d/src/transformers/masking_utils.py#L675
The issue is resolved by upgrading torch to nightly 2.10, so it makes little sense to change the code.
But when the test errors, it makes all subsequent tests fail as well. This PR therefore wraps the test with the `@run_test_using_subprocess` decorator so the failure is contained, and adds a TODO to remove the safeguard once the fix lands in a stable torch release.
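The idea behind subprocess isolation can be sketched with the stdlib alone (this is a toy illustration, not the actual `run_test_using_subprocess` implementation): a crash in the child interpreter cannot poison the parent test session.

```python
import subprocess
import sys

def run_snippet_isolated(code: str) -> bool:
    """Run `code` in a fresh Python interpreter; True if it exits cleanly."""
    return subprocess.run([sys.executable, "-c", code]).returncode == 0

# A crash here is contained in the child process; the parent keeps running.
print(run_snippet_isolated("assert 1 + 1 == 2"))    # True
print(run_snippet_isolated("raise SystemExit(1)"))  # False
```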
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41811/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41810
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41810/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41810/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41810/events
|
https://github.com/huggingface/transformers/issues/41810
| 3,543,743,446
|
I_kwDOCUB6oc7TOTvW
| 41,810
|
How do you use t5gemma decoder with a different encoder?
|
{
"login": "kushaltatariya",
"id": 103580859,
"node_id": "U_kgDOBiyEuw",
"avatar_url": "https://avatars.githubusercontent.com/u/103580859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kushaltatariya",
"html_url": "https://github.com/kushaltatariya",
"followers_url": "https://api.github.com/users/kushaltatariya/followers",
"following_url": "https://api.github.com/users/kushaltatariya/following{/other_user}",
"gists_url": "https://api.github.com/users/kushaltatariya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kushaltatariya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kushaltatariya/subscriptions",
"organizations_url": "https://api.github.com/users/kushaltatariya/orgs",
"repos_url": "https://api.github.com/users/kushaltatariya/repos",
"events_url": "https://api.github.com/users/kushaltatariya/events{/privacy}",
"received_events_url": "https://api.github.com/users/kushaltatariya/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-23T08:48:19
| 2025-10-23T09:20:00
| null |
NONE
| null | null | null | null |
I am trying to use `EncoderDecoderModel` to combine the t5gemma decoder with a pretrained DeBERTa encoder that I have trained from scratch.
Here is the code:
```python
import torch
from transformers import AutoModel, EncoderDecoderModel

model_1 = "WikiQuality/pre_filtered.am"
model_2 = "google/t5gemma-2b-2b-ul2"
encoder = AutoModel.from_pretrained(model_1)
decoder = AutoModel.from_pretrained(model_2, dtype=torch.bfloat16)
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
```
The above code raises the error:
```
AttributeError: 'T5GemmaConfig' object has no attribute 'hidden_size'
```
From this I understand that `hidden_size` is accessible as `decoder.config.decoder.hidden_size` rather than `decoder.config.hidden_size`, which is where `EncoderDecoderModel` looks. So I change the code that builds the encoder-decoder model to this:
```
model = EncoderDecoderModel(encoder=encoder, decoder=decoder.decoder)
```
This gives me the following error:
```
ValueError: Unrecognized model identifier: t5_gemma_module. Should contain one of aimv2, aimv2_vision_model, albert, align, altclip, apertus, arcee, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, bitnet, blenderbot, blenderbot-small, blip, blip-2, blip_2_qformer, bloom, blt, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, cohere2_vision, colpali, colqwen2, conditional_detr, convbert, convnext, convnextv2, cpmant, csm, ctrl, cvt, d_fine, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v2, deepseek_v3, deepseek_vl, deepseek_vl_hybrid, deformable_detr, deit, depth_anything, depth_pro, deta, detr, dia, diffllama, dinat, dinov2, dinov2_with_registers, dinov3_convnext, dinov3_vit, distilbert, doge, donut-swin, dots1, dpr, dpt, edgetam, edgetam_video, edgetam_vision_model, efficientformer, efficientloftr, efficientnet, electra, emu3, encodec, encoder-decoder, eomt, ernie, ernie4_5, ernie4_5_moe, ernie_m, esm, evolla, exaone4, falcon, falcon_h1, falcon_mamba, fastspeech2_conformer, fastspeech2_conformer_with_hifigan, flaubert, flava, flex_olmo, florence2, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, gemma3n, gemma3n_audio, gemma3n_text, gemma3n_vision, git, glm, glm4, glm4_moe, glm4v, glm4v_moe, glm4v_moe_text, glm4v_text, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gpt_oss, gptj, gptsan-japanese, granite, granite_speech, granitemoe, granitemoehybrid, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hgnet_v2, hiera, hubert, hunyuan_v1_dense, hunyuan_v1_moe, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, 
instructblipvideo, internvl, internvl_vision, jamba, janus, jetmoe, jukebox, kosmos-2, kosmos-2.5, kyutai_speech_to_text, layoutlm, layoutlmv2, layoutlmv3, led, levit, lfm2, lfm2_vl, lightglue, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longcat_flash, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, metaclip_2, mgp-str, mimi, minimax, ministral, mistral, mistral3, mixtral, mlcd, mllama, mm-grounding-dino, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, modernbert-decoder, moonshine, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmo3, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, ovis2, owlv2, owlvit, paligemma, parakeet, parakeet_ctc, parakeet_encoder, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, perception_encoder, perception_lm, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_omni, qwen2_5_vl, qwen2_5_vl_text, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen2_vl_text, qwen3, qwen3_moe, qwen3_next, qwen3_omni_moe, qwen3_vl, qwen3_vl_moe, qwen3_vl_moe_text, qwen3_vl_text, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam2, sam2_hiera_det_model, sam2_video, sam2_vision_model, sam_hq, sam_hq_vision_model, sam_vision_model, seamless_m4t, seamless_m4t_v2, seed_oss, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip2_vision_model, siglip_vision_model, smollm3, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, 
superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, t5gemma, table-transformer, tapas, textnet, time_series_transformer, timesfm, timesformer, timm_backbone, timm_wrapper, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, vaultgemma, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vitpose, vitpose_backbone, vits, vivit, vjepa2, voxtral, voxtral_encoder, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xcodec, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xlstm, xmod, yolos, yoso, zamba, zamba2, zoedepth
```
In short, extracting only the decoder from t5gemma yields a module whose model type is `t5_gemma_module`, which `EncoderDecoderModel` does not support, rather than `t5gemma`, which it does. Is there a workaround for this?
`transformers 4.57.0`
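One possible workaround for the first error, sketched purely illustratively with a stand-in object (not a real `T5GemmaConfig`; untested against t5gemma itself), is to mirror the nested `hidden_size` at the top level of the decoder config before constructing `EncoderDecoderModel`:

```python
from types import SimpleNamespace

# Stand-in for a nested config like T5GemmaConfig (illustrative only;
# the attribute layout mimics the issue, not the real class).
decoder_config = SimpleNamespace(decoder=SimpleNamespace(hidden_size=2048))

# Mirror the nested value where EncoderDecoderModel looks for it.
if not hasattr(decoder_config, "hidden_size"):
    decoder_config.hidden_size = decoder_config.decoder.hidden_size

print(decoder_config.hidden_size)  # 2048
```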
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41810/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41809
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41809/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41809/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41809/events
|
https://github.com/huggingface/transformers/issues/41809
| 3,543,653,224
|
I_kwDOCUB6oc7TN9to
| 41,809
|
[Bug] Qwen3-VL beam search with video inputs.
|
{
"login": "rzhao-zhsq",
"id": 56222328,
"node_id": "MDQ6VXNlcjU2MjIyMzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/56222328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rzhao-zhsq",
"html_url": "https://github.com/rzhao-zhsq",
"followers_url": "https://api.github.com/users/rzhao-zhsq/followers",
"following_url": "https://api.github.com/users/rzhao-zhsq/following{/other_user}",
"gists_url": "https://api.github.com/users/rzhao-zhsq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rzhao-zhsq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rzhao-zhsq/subscriptions",
"organizations_url": "https://api.github.com/users/rzhao-zhsq/orgs",
"repos_url": "https://api.github.com/users/rzhao-zhsq/repos",
"events_url": "https://api.github.com/users/rzhao-zhsq/events{/privacy}",
"received_events_url": "https://api.github.com/users/rzhao-zhsq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 5769473378,
"node_id": "LA_kwDOCUB6oc8AAAABV-MtYg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Vision",
"name": "Vision",
"color": "C079EF",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-23T08:19:25
| 2025-10-23T12:27:04
| null |
NONE
| null | null | null | null |
### System Info
Inference with Qwen3-VL fails when using video inputs with `num_beams > 1`:
```
[rank0]: File "/xx/python/transformers/src/transformers/trainer_seq2seq.py", line 255, in predict
[rank0]: return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/xx/python/transformers/src/transformers/trainer.py", line 4567, in predict
[rank0]: output = eval_loop(
[rank0]: ^^^^^^^^^^
[rank0]: File "/xx/python/transformers/src/transformers/trainer.py", line 4685, in evaluation_loop
[rank0]: losses, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/xx/python/LLaMA-Factory-latest/src/llamafactory/train/sft/trainer.py", line 137, in prediction_step
[rank0]: loss, generated_tokens, _ = super().prediction_step(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/xx/python/transformers/src/transformers/trainer_seq2seq.py", line 327, in prediction_step
[rank0]: generated_tokens = self.model.generate(**generation_inputs, **gen_kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/xx/miniconda3/envs/vlm/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/xx/python/transformers/src/transformers/generation/utils.py", line 2482, in generate
[rank0]: input_ids, model_kwargs = self._expand_inputs_for_generation(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/xx/python/transformers/src/transformers/models/qwen3_vl/modeling_qwen3_vl.py", line 1540, in _expand_inputs_for_generation
[rank0]: model_kwargs = _expand_dict_for_generation_visual(model_kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/xx/python/transformers/src/transformers/models/qwen3_vl/modeling_qwen3_vl.py", line 1513, in _expand_dict_for_generation_visual
[rank0]: samples = torch.split(video_grid_thw, list(video_nums))
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/xx/miniconda3/envs/vlm/lib/python3.11/site-packages/torch/functional.py", line 222, in split
[rank0]: return tensor.split(split_size_or_sections, dim)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/xx/miniconda3/envs/vlm/lib/python3.11/site-packages/torch/_tensor.py", line 1052, in split
[rank0]: return torch._VF.split_with_sizes(self, split_size, dim)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: RuntimeError: split_with_sizes expects split_sizes to sum exactly to 2 (input tensor's size at dimension 0), but got split_sizes=[8, 7]
```
It seems that the `vision_start_token` insertion in Qwen3-VL differs from Qwen2-VL, i.e., one `vision_start_token` is inserted for each frame of a video:
https://github.com/huggingface/transformers/blob/87be5595081364ef99393feeaa60d71db3652679/src/transformers/models/qwen3_vl/processing_qwen3_vl.py#L200-L215
however, Qwen3-VL's `_get_image_nums_and_video_nums` counts each `vision_start_token` as a separate video:
https://github.com/huggingface/transformers/blob/87be5595081364ef99393feeaa60d71db3652679/src/transformers/models/qwen3_vl/modeling_qwen3_vl.py#L1491-L1493
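The mismatch can be illustrated with a toy token stream (pure Python, not the transformers code; `<vs>` stands in for the vision start token):

```python
VISION_START = "<vs>"

def count_vision_starts(tokens):
    # Mirrors the idea of counting vision-start tokens to infer video_nums.
    return sum(tok == VISION_START for tok in tokens)

# Qwen2-VL style: one start token for a whole 8-frame video.
qwen2_style = [VISION_START] + ["frame"] * 8
# Qwen3-VL style: the processor inserts a start token before every frame.
qwen3_style = [VISION_START, "frame"] * 8

print(count_vision_starts(qwen2_style))  # 1 -> one video, as intended
print(count_vision_starts(qwen3_style))  # 8 -> overcounted, so the later
                                         #      torch.split sizes don't add up
```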
@zucchini-nlp
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run inference with Qwen3-VL using video inputs and `num_beams > 1`.
### Expected behavior
Generation should complete correctly with beam search and video inputs.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41809/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41808
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41808/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41808/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41808/events
|
https://github.com/huggingface/transformers/pull/41808
| 3,543,626,930
|
PR_kwDOCUB6oc6vNb2D
| 41,808
|
QwenVL: add skipped keys in `setattr` as well
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-23T08:10:38
| 2025-10-23T08:20:26
| null |
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41808",
"html_url": "https://github.com/huggingface/transformers/pull/41808",
"diff_url": "https://github.com/huggingface/transformers/pull/41808.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41808.patch",
"merged_at": null
}
|
# What does this PR do?
As per the title: `__getattr__` and `__setattr__` currently skip different sets of keys; this PR makes `__setattr__` skip the same keys as well.
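The symmetry being restored can be sketched like this (illustrative only, not the QwenVL config code; the class and key names are made up):

```python
_SKIP_KEYS = {"vision_config"}  # keys that must NOT be delegated

class CompositeConfig:
    """Delegates unknown attributes to an inner text config, except for
    _SKIP_KEYS -- and does so symmetrically for reads and writes."""

    def __init__(self, text_config):
        object.__setattr__(self, "text_config", text_config)

    def __getattr__(self, name):
        # Called only when normal lookup fails; never delegate skipped keys.
        if name in _SKIP_KEYS:
            raise AttributeError(name)
        return getattr(object.__getattribute__(self, "text_config"), name)

    def __setattr__(self, name, value):
        # Same skip set as __getattr__, so reads and writes stay consistent.
        if name in _SKIP_KEYS or name == "text_config":
            object.__setattr__(self, name, value)
        else:
            setattr(self.text_config, name, value)

class _Text:
    hidden_size = 4

cfg = CompositeConfig(_Text())
cfg.hidden_size = 8                  # delegated write lands on the inner config
print(cfg.hidden_size)               # 8
print(cfg.text_config.hidden_size)   # 8
```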
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41808/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41807
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41807/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41807/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41807/events
|
https://github.com/huggingface/transformers/pull/41807
| 3,543,563,642
|
PR_kwDOCUB6oc6vNOhS
| 41,807
|
git commit -m "Fix: corrected outdated documentation link in README.md"
|
{
"login": "ruheena-shaik",
"id": 214262947,
"node_id": "U_kgDODMVkow",
"avatar_url": "https://avatars.githubusercontent.com/u/214262947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruheena-shaik",
"html_url": "https://github.com/ruheena-shaik",
"followers_url": "https://api.github.com/users/ruheena-shaik/followers",
"following_url": "https://api.github.com/users/ruheena-shaik/following{/other_user}",
"gists_url": "https://api.github.com/users/ruheena-shaik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruheena-shaik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruheena-shaik/subscriptions",
"organizations_url": "https://api.github.com/users/ruheena-shaik/orgs",
"repos_url": "https://api.github.com/users/ruheena-shaik/repos",
"events_url": "https://api.github.com/users/ruheena-shaik/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruheena-shaik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-23T07:49:12
| 2025-10-23T12:32:56
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41807",
"html_url": "https://github.com/huggingface/transformers/pull/41807",
"diff_url": "https://github.com/huggingface/transformers/pull/41807.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41807.patch",
"merged_at": null
}
|
This commit fixes a broken documentation link in the README file. The old link (https://huggingface.co/docs/transformers/index) returned a 404 error. It is now updated to the correct URL: https://huggingface.co/docs/transformers.
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41807/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41806
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41806/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41806/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41806/events
|
https://github.com/huggingface/transformers/pull/41806
| 3,543,487,212
|
PR_kwDOCUB6oc6vM-cA
| 41,806
|
revert `_prepare_4d_causal_attention_mask_with_cache_position` for gpt2
|
{
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T07:22:16
| 2025-10-24T01:22:51
| 2025-10-24T01:22:51
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41806",
"html_url": "https://github.com/huggingface/transformers/pull/41806",
"diff_url": "https://github.com/huggingface/transformers/pull/41806.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41806.patch",
"merged_at": null
}
|
Hi @zucchini-nlp
PR #39754 removed `_prepare_4d_causal_attention_mask_with_cache_position` from gpt2, which caused a 40% performance regression on CPU. You can reproduce it with
`numactl -C 0-7 --membind 0 python test.py`
```python
import time
import torch
from transformers import pipeline, set_seed, AutoTokenizer
set_seed(42)
model_id = "openai-community/gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = 'left'
if tokenizer.pad_token_id is None:
tokenizer.pad_token_id = tokenizer.eos_token_id
pipe = pipeline("text-generation", model=model_id, tokenizer=tokenizer, torch_dtype=torch.float16, device_map="cpu")
generation_config = pipe.model.generation_config
generation_config.do_sample = False
generation_config.use_cache = True
generation_config.max_new_tokens = 128
generation_config.min_new_tokens = 128
generation_config.cache_implementation = "static"
generation_config.temperature = 1.0
generation_config.top_p = 1.0
generation_config.num_beams = 1
pipe.model.config._attn_implementation = "sdpa"
inputs = "It is done, and submitted. You can play 'Survival of the Tastiest' on Android, and on the web. Playing on the web works, but you have to simulate multiple touch for table moving and that can be a bit confusing. There is a lot I'd like to talk about. I will go through every topic, insted of making the typical what went right/wrong list. Concept Working over the theme was probably one of the hardest tasks which I had to face. Originally, I had an idea of what kind of game I wanted to develop, gameplay wise - something with a lot of enemies/actors"
for _ in range(5):
set_seed(42)
pipe(inputs, generation_config=generation_config)
for _ in range(5):
set_seed(42)
start = time.time()
pipe(inputs, generation_config=generation_config)
end = time.time()
print(f"{pipe.model.dtype} time costs {(end-start)*1000} ms")
```
Reverting `_prepare_4d_causal_attention_mask_with_cache_position` fixes the regression.
|
{
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41806/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41805
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41805/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41805/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41805/events
|
https://github.com/huggingface/transformers/pull/41805
| 3,543,067,667
|
PR_kwDOCUB6oc6vLu8c
| 41,805
|
make apollo test case pass
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-23T04:16:32
| 2025-10-23T16:59:38
| 2025-10-23T10:07:31
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41805",
"html_url": "https://github.com/huggingface/transformers/pull/41805",
"diff_url": "https://github.com/huggingface/transformers/pull/41805.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41805.patch",
"merged_at": "2025-10-23T10:07:31"
}
|
As with GaLore here https://github.com/huggingface/transformers/blob/main/tests/trainer/test_trainer.py#L2443, this applies the same change to make the APOLLO test case `test_apollo_lr_display_without_scheduler` pass.
@ydshieh, please help review, thanks very much.
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41805/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41805/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41804
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41804/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41804/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41804/events
|
https://github.com/huggingface/transformers/pull/41804
| 3,542,814,283
|
PR_kwDOCUB6oc6vK86b
| 41,804
|
T5 migration to new masking interface
|
{
"login": "Aravind-11",
"id": 42345018,
"node_id": "MDQ6VXNlcjQyMzQ1MDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/42345018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aravind-11",
"html_url": "https://github.com/Aravind-11",
"followers_url": "https://api.github.com/users/Aravind-11/followers",
"following_url": "https://api.github.com/users/Aravind-11/following{/other_user}",
"gists_url": "https://api.github.com/users/Aravind-11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aravind-11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aravind-11/subscriptions",
"organizations_url": "https://api.github.com/users/Aravind-11/orgs",
"repos_url": "https://api.github.com/users/Aravind-11/repos",
"events_url": "https://api.github.com/users/Aravind-11/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aravind-11/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-23T01:19:27
| 2025-10-29T17:33:12
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41804",
"html_url": "https://github.com/huggingface/transformers/pull/41804",
"diff_url": "https://github.com/huggingface/transformers/pull/41804.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41804.patch",
"merged_at": null
}
|
# What does this PR do?
This PR migrates the T5 model to use the new masking utilities (`masking_utils.py`) for attention mask creation.
Fixes #40743
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@vasqu @Rocketknight1
All existing test cases pass except `test_small_integration_test`, which needs a GPU. I need guidance on any additional test cases to add. Thank you.
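As a minimal illustration (not the new `masking_utils` API itself), this is the kind of mask construction the migration centralizes: a lower-triangular boolean causal mask.

```python
import torch

# Hedged sketch: a boolean causal mask where True means the query position
# may attend to the key position. The migration moves this kind of logic
# out of per-model code and into shared masking utilities.
seq_len = 4
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
print(causal_mask.sum().item())  # 10 allowed positions for seq_len=4
```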
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41804/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41803
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41803/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41803/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41803/events
|
https://github.com/huggingface/transformers/issues/41803
| 3,542,743,890
|
I_kwDOCUB6oc7TKftS
| 41,803
|
torch.compile graph break in `flash_attention_v2` backend
|
{
"login": "StrongerXi",
"id": 26714592,
"node_id": "MDQ6VXNlcjI2NzE0NTky",
"avatar_url": "https://avatars.githubusercontent.com/u/26714592?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StrongerXi",
"html_url": "https://github.com/StrongerXi",
"followers_url": "https://api.github.com/users/StrongerXi/followers",
"following_url": "https://api.github.com/users/StrongerXi/following{/other_user}",
"gists_url": "https://api.github.com/users/StrongerXi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StrongerXi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StrongerXi/subscriptions",
"organizations_url": "https://api.github.com/users/StrongerXi/orgs",
"repos_url": "https://api.github.com/users/StrongerXi/repos",
"events_url": "https://api.github.com/users/StrongerXi/events{/privacy}",
"received_events_url": "https://api.github.com/users/StrongerXi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-23T00:29:22
| 2025-10-23T17:29:26
| null |
NONE
| null | null | null | null |
### Feature request
Repro:
```python
import torch
from transformers import AutoModelForCausalLM
device = "cuda"
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
).to(device).eval()
model = torch.compile(model, fullgraph=True)
# dummy inputs; we only want logits
bsz, seqlen = 1, 128
inp = torch.randint(0, model.config.vocab_size, (bsz, seqlen), device=device)
with torch.inference_mode():
model(input_ids=inp)
```
Output:
```
Traceback (most recent call last):
File "/home/ryanguo99/repos/verl/run.py", line 20, in <module>
model(input_ids=inp)
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 418, in __call__
return super().__call__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1777, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1788, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 895, in compile_wrapper
raise e.with_traceback(None) from e.__cause__ # User compiler error
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.Unsupported: Data-dependent branching
Explanation: Detected data-dependent branching (e.g. `if my_tensor.sum() > 0:`). Dynamo does not support tracing dynamic control flow.
Hint: This graph break is fundamental - it is unlikely that Dynamo will ever be able to trace through your code. Consider finding a workaround.
Hint: Use `torch.cond` to express dynamic control flow.
Developer debug context: attempted to jump with TensorVariable()
For more details about this graph break, please visit: https://meta-pytorch.github.io/compile-graph-break-site/gb/gb0170.html
from user code:
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/transformers/utils/generic.py", line 918, in wrapper
output = func(self, *args, **kwargs)
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/transformers/models/llama/modeling_llama.py", line 459, in forward
outputs: BaseModelOutputWithPast = self.model(
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/transformers/utils/generic.py", line 1064, in wrapper
outputs = func(self, *args, **kwargs)
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/transformers/models/llama/modeling_llama.py", line 395, in forward
hidden_states = decoder_layer(
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/transformers/modeling_layers.py", line 94, in __call__
return super().__call__(*args, **kwargs)
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1777, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1788, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
return func(*args, **kwargs)
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/transformers/models/llama/modeling_llama.py", line 294, in forward
hidden_states, _ = self.self_attn(
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
return func(*args, **kwargs)
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/transformers/models/llama/modeling_llama.py", line 252, in forward
attn_output, attn_weights = attention_interface(
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/transformers/integrations/flash_attention.py", line 66, in flash_attention_forward
attn_output = _flash_attention_forward(
File "/home/ryanguo99/.conda/envs/verl-nightly/lib/python3.12/site-packages/transformers/modeling_flash_attention_utils.py", line 632, in _flash_attention_forward
elif is_fa_with_varlen_kwargs or is_fa_with_position_ids:
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
```
### Motivation
LLM RL frameworks that use transformers for training typically set `attn_implementation="flash_attention_2"`, e.g., [verl](https://github.com/volcengine/verl/blob/f50e5c2e8f11201c9759d1103464ff1653231ab8/verl/workers/fsdp_workers.py#L318-L320), because the default SDPA backend can't route to flash attention under variable sequence lengths.
So this graph break negatively affects the performance of the compiled model.
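The failing branch in `_flash_attention_forward` is an instance of the general pattern below: a plain Python `if` on a tensor-derived value, which Dynamo cannot trace with `fullgraph=True`. A minimal standalone reproduction of that pattern:

```python
import torch

def branchy(x):
    # A Python `if` on a tensor value forces Dynamo to know the value at
    # trace time -- the "data-dependent branching" named in the error.
    if x.sum() > 0:
        return x + 1
    return x - 1

compiled = torch.compile(branchy, fullgraph=True)
try:
    compiled(torch.ones(3))
    raised = False
except Exception:
    raised = True
print("graph break with fullgraph=True:", raised)
```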
### Your contribution
.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41803/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41802
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41802/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41802/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41802/events
|
https://github.com/huggingface/transformers/pull/41802
| 3,542,471,047
|
PR_kwDOCUB6oc6vJ1CO
| 41,802
|
Fixed some grammar mistakes
|
{
"login": "FrogWarlord",
"id": 235806882,
"node_id": "U_kgDODg4gog",
"avatar_url": "https://avatars.githubusercontent.com/u/235806882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrogWarlord",
"html_url": "https://github.com/FrogWarlord",
"followers_url": "https://api.github.com/users/FrogWarlord/followers",
"following_url": "https://api.github.com/users/FrogWarlord/following{/other_user}",
"gists_url": "https://api.github.com/users/FrogWarlord/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrogWarlord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrogWarlord/subscriptions",
"organizations_url": "https://api.github.com/users/FrogWarlord/orgs",
"repos_url": "https://api.github.com/users/FrogWarlord/repos",
"events_url": "https://api.github.com/users/FrogWarlord/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrogWarlord/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T21:45:22
| 2025-10-23T12:40:28
| 2025-10-23T12:39:58
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41802",
"html_url": "https://github.com/huggingface/transformers/pull/41802",
"diff_url": "https://github.com/huggingface/transformers/pull/41802.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41802.patch",
"merged_at": "2025-10-23T12:39:58"
}
|
Added spaces between words, fixed a typo, and corrected other errors.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41802/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41801
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41801/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41801/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41801/events
|
https://github.com/huggingface/transformers/pull/41801
| 3,542,469,312
|
PR_kwDOCUB6oc6vJ0qZ
| 41,801
|
SDPA and FlashAttention-2 support for LayoutLMv3
|
{
"login": "jackiehimel",
"id": 142959852,
"node_id": "U_kgDOCIVk7A",
"avatar_url": "https://avatars.githubusercontent.com/u/142959852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackiehimel",
"html_url": "https://github.com/jackiehimel",
"followers_url": "https://api.github.com/users/jackiehimel/followers",
"following_url": "https://api.github.com/users/jackiehimel/following{/other_user}",
"gists_url": "https://api.github.com/users/jackiehimel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackiehimel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackiehimel/subscriptions",
"organizations_url": "https://api.github.com/users/jackiehimel/orgs",
"repos_url": "https://api.github.com/users/jackiehimel/repos",
"events_url": "https://api.github.com/users/jackiehimel/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackiehimel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T21:44:23
| 2025-10-29T04:12:53
| 2025-10-29T04:12:17
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41801",
"html_url": "https://github.com/huggingface/transformers/pull/41801",
"diff_url": "https://github.com/huggingface/transformers/pull/41801.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41801.patch",
"merged_at": null
}
|
# What does this PR do?
Adds SDPA and FlashAttention-2 support to LayoutLMv3 following the same pattern as other models. Fully backward compatible.
SDPA converts masks to boolean format. FA2 uses `_upad_input` for variable-length sequences and avoids redundant unpads. Both fall back gracefully when needed. FA2 is O(N) memory vs O(N²).
Fixes #35467
# Changes
- Added `LayoutLMv3SdpaAttention` using `torch.nn.functional.scaled_dot_product_attention`
- Added `LayoutLMv3FlashAttention2` with `flash_attn_func` / `flash_attn_varlen_func`
- Both inherit from `LayoutLMv3Attention`
- Fallback to standard attention when backends unavailable or `output_attentions=True` / relative position bias is used
# Testing
- 121 tests passed in `test_modeling_layoutlmv3.py`
- Manually verified forward passes with/without attention masks
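As a hedged sketch (not the PR's actual code), the boolean-mask conversion mentioned above relies on SDPA accepting a boolean `attn_mask` where `True` means "attend", so an additive float mask (0 to keep, -inf to drop) converts with a simple comparison:

```python
import torch
import torch.nn.functional as F

# Additive mask: 0.0 keeps a key position, -inf masks it out.
additive_mask = torch.tensor([[0.0, float("-inf"), 0.0]])
# Boolean form: True where attention is allowed; broadcasts over
# batch, heads, and query positions.
bool_mask = additive_mask == 0.0

q = k = v = torch.randn(1, 1, 3, 4)  # (batch, heads, seq, head_dim)
out = F.scaled_dot_product_attention(q, k, v, attn_mask=bool_mask)
print(tuple(out.shape))  # (1, 1, 3, 4)
```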
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@vasqu @ArthurZucker @CyrilVallez
|
{
"login": "jackiehimel",
"id": 142959852,
"node_id": "U_kgDOCIVk7A",
"avatar_url": "https://avatars.githubusercontent.com/u/142959852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackiehimel",
"html_url": "https://github.com/jackiehimel",
"followers_url": "https://api.github.com/users/jackiehimel/followers",
"following_url": "https://api.github.com/users/jackiehimel/following{/other_user}",
"gists_url": "https://api.github.com/users/jackiehimel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackiehimel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackiehimel/subscriptions",
"organizations_url": "https://api.github.com/users/jackiehimel/orgs",
"repos_url": "https://api.github.com/users/jackiehimel/repos",
"events_url": "https://api.github.com/users/jackiehimel/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackiehimel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41801/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41800
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41800/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41800/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41800/events
|
https://github.com/huggingface/transformers/pull/41800
| 3,542,467,812
|
PR_kwDOCUB6oc6vJ0Vx
| 41,800
|
Increasing clarity
|
{
"login": "FrogWarlord",
"id": 235806882,
"node_id": "U_kgDODg4gog",
"avatar_url": "https://avatars.githubusercontent.com/u/235806882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrogWarlord",
"html_url": "https://github.com/FrogWarlord",
"followers_url": "https://api.github.com/users/FrogWarlord/followers",
"following_url": "https://api.github.com/users/FrogWarlord/following{/other_user}",
"gists_url": "https://api.github.com/users/FrogWarlord/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrogWarlord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrogWarlord/subscriptions",
"organizations_url": "https://api.github.com/users/FrogWarlord/orgs",
"repos_url": "https://api.github.com/users/FrogWarlord/repos",
"events_url": "https://api.github.com/users/FrogWarlord/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrogWarlord/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-22T21:43:31
| 2025-10-22T21:43:31
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41800",
"html_url": "https://github.com/huggingface/transformers/pull/41800",
"diff_url": "https://github.com/huggingface/transformers/pull/41800.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41800.patch",
"merged_at": null
}
|
added some missing things to increase clarity
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41800/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41799
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41799/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41799/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41799/events
|
https://github.com/huggingface/transformers/pull/41799
| 3,542,466,391
|
PR_kwDOCUB6oc6vJ0CO
| 41,799
|
Fixed grammar mistakes
|
{
"login": "FrogWarlord",
"id": 235806882,
"node_id": "U_kgDODg4gog",
"avatar_url": "https://avatars.githubusercontent.com/u/235806882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrogWarlord",
"html_url": "https://github.com/FrogWarlord",
"followers_url": "https://api.github.com/users/FrogWarlord/followers",
"following_url": "https://api.github.com/users/FrogWarlord/following{/other_user}",
"gists_url": "https://api.github.com/users/FrogWarlord/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrogWarlord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrogWarlord/subscriptions",
"organizations_url": "https://api.github.com/users/FrogWarlord/orgs",
"repos_url": "https://api.github.com/users/FrogWarlord/repos",
"events_url": "https://api.github.com/users/FrogWarlord/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrogWarlord/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T21:42:43
| 2025-10-23T12:34:33
| 2025-10-23T12:34:02
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41799",
"html_url": "https://github.com/huggingface/transformers/pull/41799",
"diff_url": "https://github.com/huggingface/transformers/pull/41799.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41799.patch",
"merged_at": "2025-10-23T12:34:02"
}
|
fixed a couple grammar mistakes
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41799/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41798
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41798/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41798/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41798/events
|
https://github.com/huggingface/transformers/pull/41798
| 3,542,407,849
|
PR_kwDOCUB6oc6vJnW_
| 41,798
|
p-less Sampling: A Robust Hyperparameter-Free Approach for LLM Decoding
|
{
"login": "ryttry",
"id": 22166263,
"node_id": "MDQ6VXNlcjIyMTY2MjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/22166263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryttry",
"html_url": "https://github.com/ryttry",
"followers_url": "https://api.github.com/users/ryttry/followers",
"following_url": "https://api.github.com/users/ryttry/following{/other_user}",
"gists_url": "https://api.github.com/users/ryttry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryttry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryttry/subscriptions",
"organizations_url": "https://api.github.com/users/ryttry/orgs",
"repos_url": "https://api.github.com/users/ryttry/repos",
"events_url": "https://api.github.com/users/ryttry/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryttry/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-22T21:15:45
| 2025-10-28T12:16:57
| null |
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41798",
"html_url": "https://github.com/huggingface/transformers/pull/41798",
"diff_url": "https://github.com/huggingface/transformers/pull/41798.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41798.patch",
"merged_at": null
}
|
# What does this PR do?
This PR contributes the _p_-less and _p_-less<sub>norm</sub> sampling methods for LLM decoding to the `model.generate` endpoint, the standard endpoint used for all other sampling methods such as top-_k_, top-_p_, etc. Like the other sampling methods, logits warpers are also created for _p_-less (`PLessLogitsWarper`) and _p_-less<sub>norm</sub> (`PLessNormLogitsWarper`).
Reference:
For details, refer to the paper "_p_-less Sampling: A Robust Hyperparameter-Free Approach for LLM Decoding", available at https://arxiv.org/abs/2509.23234 (NeurIPS 2025)
TL;DR: The _p_-less sampling method (and _p_-less<sub>norm</sub>) is hyperparameter-free, considers the full token distribution when determining the probability threshold for admitting tokens into the sampling set, is robust to high temperatures, and scales appropriately with the entropy of the distribution, i.e. admitting more tokens into the sampling set when entropy is high and vice versa.
This PR does not introduce any new dependency.
Documentation and code comments are written for the `PLessLogitsWarper` and `PLessNormLogitsWarper` classes.
## Tests
- Tests for directly using the p-less and p-less-norm logits warpers are in `tests/generation/test_logits_process.py`
- Tests on the `model.generate` endpoint using p-less and p-less-norm logits warpers are written in `tests/generation/test_utils.py`
- Tests on the generation configuration for the `p_less` and `p_less_norm` arguments are written in `tests/generation/test_configuration_utils.py`
Tests all passed locally.
@zucchini-nlp @gante , looking forward to the review
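For context on how such logits warpers slot into decoding, the sketch below shows the generic warper pattern (sort, find a cutoff, mask the tail) using plain top-p filtering as the example; it deliberately does NOT reproduce the _p_-less rule itself, which is defined in the paper:

```python
import math

def top_p_filter(logits, top_p=0.9, filter_value=float("-inf")):
    # Generic logits-warper pattern, illustrated with top-p (nucleus)
    # filtering -- NOT the p-less rule from this PR.
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    m = max(logits)
    exps = [math.exp(logits[i] - m) for i in order]
    total = sum(exps)
    probs = [e / total for e in exps]  # softmax over the sorted logits
    filtered = [filter_value] * len(logits)
    cumulative = 0.0
    for rank, idx in enumerate(order):
        # keep the smallest prefix whose mass reaches top_p (always keep >= 1)
        if cumulative < top_p:
            filtered[idx] = logits[idx]
        cumulative += probs[rank]
    return filtered

print(top_p_filter([2.0, 1.0, 0.1, -1.0], top_p=0.8))
# keeps the two highest-probability tokens, masks the rest
```

A hyperparameter-free warper like _p_-less would replace the fixed `top_p` threshold with one computed from the full distribution.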
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41798/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41797
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41797/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41797/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41797/events
|
https://github.com/huggingface/transformers/pull/41797
| 3,542,377,367
|
PR_kwDOCUB6oc6vJg0-
| 41,797
|
Add deepseek ocr
|
{
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-22T21:04:03
| 2025-10-28T18:33:52
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41797",
"html_url": "https://github.com/huggingface/transformers/pull/41797",
"diff_url": "https://github.com/huggingface/transformers/pull/41797.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41797.patch",
"merged_at": null
}
|
# What does this PR do?
As per title. Architecturally: Llava-next is used as the skeleton, with a modified SamModel and a modified ClipVisionModel; the deepseekV2 decoder is kept untouched (using AutoModel) and adapted via config only.
- [x] Working config + random weights init
- [x] Modular draft with subconfigs (two vision configs)
- [x] Conversion from original checkpoint done
- [x] Modular model finished
- [x] Integration tests/OCR tests working as in original codebase
- [x] Make modular slimmer
- [ ] Make processor faster
- [ ] Complete test suite for `transformers`
- [ ] Remap weights to avoid conversion / on-the-fly conversion? (cc @ArthurZucker )
The current branch is functional. You can convert the weights, run the following on your image, and you'll get a nice OCR output.
```python
import torch
from PIL import Image
from transformers import DeepseekOcrForConditionalGeneration, DeepseekOcrProcessor
from transformers import model_addition_debugger_context
processor = DeepseekOcrProcessor.from_pretrained("deepseek_ocr_converted")
model = DeepseekOcrForConditionalGeneration.from_pretrained("deepseek_ocr_converted", dtype=torch.bfloat16)
image = Image.open("handwritten_letter_small.png").convert("RGB")
conversation = [
{
"role": "<|User|>",
"content": [
{"type": "image", "path": "./handwritten_letter_small.png"},
{"type": "text", "text": "<|grounding|>Convert the document to markdown."},
],
}
]
inputs = processor.apply_chat_template(
conversation,
return_dict=True,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt"
)
with torch.no_grad():
generated = model.generate(**inputs, max_new_tokens=50)
text = processor.batch_decode(generated, skip_special_tokens=False)[0]
print(text.strip())
```
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41797/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41797/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41796
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41796/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41796/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41796/events
|
https://github.com/huggingface/transformers/pull/41796
| 3,541,770,015
|
PR_kwDOCUB6oc6vHeKl
| 41,796
|
make lfm2_moe integration test pass on XPU
|
{
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T17:47:34
| 2025-10-28T17:59:43
| 2025-10-28T14:50:19
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41796",
"html_url": "https://github.com/huggingface/transformers/pull/41796",
"diff_url": "https://github.com/huggingface/transformers/pull/41796.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41796.patch",
"merged_at": "2025-10-28T14:50:19"
}
|
@ydshieh , pls help review, thx very much
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41796/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41795
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41795/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41795/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41795/events
|
https://github.com/huggingface/transformers/pull/41795
| 3,541,547,985
|
PR_kwDOCUB6oc6vGuR8
| 41,795
|
Fix MXFP4 quantizer to support variable num_local_experts and hidden_size
|
{
"login": "marksverdhei",
"id": 46672778,
"node_id": "MDQ6VXNlcjQ2NjcyNzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/46672778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marksverdhei",
"html_url": "https://github.com/marksverdhei",
"followers_url": "https://api.github.com/users/marksverdhei/followers",
"following_url": "https://api.github.com/users/marksverdhei/following{/other_user}",
"gists_url": "https://api.github.com/users/marksverdhei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marksverdhei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marksverdhei/subscriptions",
"organizations_url": "https://api.github.com/users/marksverdhei/orgs",
"repos_url": "https://api.github.com/users/marksverdhei/repos",
"events_url": "https://api.github.com/users/marksverdhei/events{/privacy}",
"received_events_url": "https://api.github.com/users/marksverdhei/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T16:29:55
| 2025-10-24T12:18:53
| 2025-10-24T12:18:52
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41795",
"html_url": "https://github.com/huggingface/transformers/pull/41795",
"diff_url": "https://github.com/huggingface/transformers/pull/41795.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41795.patch",
"merged_at": "2025-10-24T12:18:52"
}
|
# What does this PR do?
This PR replaces hardcoded values `num_local_experts` and `hidden_size` in `MXFP4Config` for `GPT-OSS` type models.
I discovered this when experimenting with non-standard configs of the GPT-OSS architecture, but I'm pretty sure it'll break for openai/gpt-oss-120b as well, since its number of experts differs from the hardcoded value.
The quantizer hardcoded 32 experts and 2880 hidden_size in the reshape operations. This caused failures when quantizing models with different numbers of experts.
Changes:
- Read num_local_experts and hidden_size from model.config
- Use dynamic values in reshape operations instead of hardcoded constants
- Defaults to 32 and 2880 for backward compatibility
This enables quantizing averaged/merged MoE models with fewer experts.
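The idea behind the change can be sketched as follows (the helper and config names here are illustrative, not the actual transformers internals): read the shape parameters from the model config, falling back to the GPT-OSS defaults for backward compatibility.

```python
# Illustrative sketch of the fix: derive reshape dimensions from the
# model config instead of hardcoding them.

def expert_reshape_dims(config):
    num_experts = getattr(config, "num_local_experts", 32)  # was hardcoded 32
    hidden_size = getattr(config, "hidden_size", 2880)      # was hardcoded 2880
    return num_experts, hidden_size

class TinyConfig:
    # a hypothetical merged MoE model with fewer experts
    num_local_experts = 8
    hidden_size = 512

print(expert_reshape_dims(TinyConfig()))  # (8, 512)
print(expert_reshape_dims(object()))      # (32, 2880) -- defaults
```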
Passed all tests that I was able to run locally on 24gb of vram.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - no
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section? - yes
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. - I looked and didn't find an issue
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - likely not necessary
- [x] Did you write any new necessary tests? - no, unsure if needed
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41795/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41795/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41794
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41794/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41794/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41794/events
|
https://github.com/huggingface/transformers/pull/41794
| 3,541,310,894
|
PR_kwDOCUB6oc6vF6T0
| 41,794
|
Enable flake8-pie rules
|
{
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-22T15:18:11
| 2025-10-23T12:18:50
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41794",
"html_url": "https://github.com/huggingface/transformers/pull/41794",
"diff_url": "https://github.com/huggingface/transformers/pull/41794.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41794.patch",
"merged_at": null
}
|
# What does this PR do?
This PR enables all [flake8-pie](https://docs.astral.sh/ruff/rules/#flake8-pie-pie) rules in ruff. These rules are:
```
PIE790 Unnecessary pass statement
PIE794 Class field {name} is defined multiple times
PIE796 Enum contains duplicate value: {value}
PIE800 Unnecessary spread **
PIE804 Unnecessary dict kwargs
PIE807 Prefer {container} over useless lambda
PIE808 Unnecessary start argument in range
PIE810 Call {attr} once with a tuple
```
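To make a few of these concrete, here are before/after illustrations of the kind of pattern each rule flags (examples are illustrative, not taken from the diff):

```python
# PIE808: unnecessary start argument in range
before_808 = list(range(0, 5))
after_808 = list(range(5))          # same result, no redundant start

# PIE810: call startswith once with a tuple instead of chaining `or`
name = "weight.bias"
before_810 = name.startswith("weight") or name.startswith("bias")
after_810 = name.startswith(("weight", "bias"))

# PIE807: prefer the container constructor over a useless lambda
before_807 = (lambda: [])()
after_807 = list()

print(before_808 == after_808, before_810 == after_810, before_807 == after_807)
```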
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41794/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41793
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41793/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41793/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41793/events
|
https://github.com/huggingface/transformers/pull/41793
| 3,541,201,866
|
PR_kwDOCUB6oc6vFiXC
| 41,793
|
Fix chat schema tests
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T14:49:21
| 2025-10-22T15:00:51
| 2025-10-22T15:00:50
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41793",
"html_url": "https://github.com/huggingface/transformers/pull/41793",
"diff_url": "https://github.com/huggingface/transformers/pull/41793.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41793.patch",
"merged_at": "2025-10-22T15:00:50"
}
|
My bad - the [chat schema PR](https://github.com/huggingface/transformers/pull/40894) added some tests that were failing after my final commits, and I didn't realize because they weren't running in the CI for the PR.
The main problem was that I dropped support for `"type": "any"`, which is no longer part of the JSON schema standard, but left it in some tests, which caused errors. This PR changes the code to read a missing `type` annotation as implicitly allowing any type, which is compatible with the standard. It also removes a test for Processor schema save/loading, which was dropped from the PR (but will be added soon!)
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41793/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41792
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41792/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41792/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41792/events
|
https://github.com/huggingface/transformers/pull/41792
| 3,540,958,408
|
PR_kwDOCUB6oc6vEtYX
| 41,792
|
Bump AMD docker
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T13:48:15
| 2025-10-23T08:44:23
| 2025-10-23T08:44:20
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41792",
"html_url": "https://github.com/huggingface/transformers/pull/41792",
"diff_url": "https://github.com/huggingface/transformers/pull/41792.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41792.patch",
"merged_at": "2025-10-23T08:44:20"
}
|
This PR bumps the base version of the AMD docker to ROCm 7.0, making it compatible with the MI355, which was not the case before.
To avoid issues with detectron, its install is split from the rest of the pip install, and to support the MI355 for flash attention, the gfx950 architecture was added to the FA compilation targets.
I tested the docker on MI325 and MI355 with important models and there were no major new failures.
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41792/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41791
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41791/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41791/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41791/events
|
https://github.com/huggingface/transformers/pull/41791
| 3,540,914,446
|
PR_kwDOCUB6oc6vEj65
| 41,791
|
[`Onnx docs`] Remove some traces
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T13:36:30
| 2025-10-25T00:53:58
| 2025-10-23T08:34:25
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41791",
"html_url": "https://github.com/huggingface/transformers/pull/41791",
"diff_url": "https://github.com/huggingface/transformers/pull/41791.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41791.patch",
"merged_at": "2025-10-23T08:34:25"
}
|
Other language docs are failing on main. This looked like the root cause.
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41791/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41790
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41790/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41790/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41790/events
|
https://github.com/huggingface/transformers/pull/41790
| 3,540,888,824
|
PR_kwDOCUB6oc6vEeYZ
| 41,790
|
Fix attention mask in mamba layers
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T13:29:41
| 2025-10-22T16:15:38
| 2025-10-22T16:15:38
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41790",
"html_url": "https://github.com/huggingface/transformers/pull/41790",
"diff_url": "https://github.com/huggingface/transformers/pull/41790.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41790.patch",
"merged_at": "2025-10-22T16:15:38"
}
|
# What does this PR do?
As per title. The LFM-VL team reported that batched generation outputs garbage with one of the checkpoints. I found that the masking is not being applied at all for mamba layers.
Firstly, mamba layers do not have 4D attention weights and thus need a normal 2D attention mask. Also, we do not need to check whether the attention mask has a certain shape; instead we only make sure it is applied in the prefill stage.
This fixes LFM-VL, but I guess all mamba models are affected. I'm still surprised we didn't get issues before, and that bigger LFM-VL checkpoints generated normal text even without proper masking. I'm going to fix the other mamba models and tag for review when ready.
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41790/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41789
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41789/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41789/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41789/events
|
https://github.com/huggingface/transformers/pull/41789
| 3,540,764,638
|
PR_kwDOCUB6oc6vEDS7
| 41,789
|
Use indices as position_ids in modernebert
|
{
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-22T12:56:46
| 2025-10-23T08:44:47
| null |
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41789",
"html_url": "https://github.com/huggingface/transformers/pull/41789",
"diff_url": "https://github.com/huggingface/transformers/pull/41789.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41789.patch",
"merged_at": null
}
|
Currently there is an issue in `modernbert` stemming from #39847: `rotary_emb` always passes `position_ids` as if it were a tensor, but when the attention implementation is flash it can be `None`. To avoid the error `None has no attribute .shape`, we use `indices` as the `position_ids` in the rotary embedding.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41789/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41788
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41788/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41788/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41788/events
|
https://github.com/huggingface/transformers/pull/41788
| 3,540,713,784
|
PR_kwDOCUB6oc6vD4M5
| 41,788
|
fix type annotation typo in docstring
|
{
"login": "johntheprime",
"id": 80041901,
"node_id": "MDQ6VXNlcjgwMDQxOTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/80041901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johntheprime",
"html_url": "https://github.com/johntheprime",
"followers_url": "https://api.github.com/users/johntheprime/followers",
"following_url": "https://api.github.com/users/johntheprime/following{/other_user}",
"gists_url": "https://api.github.com/users/johntheprime/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johntheprime/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johntheprime/subscriptions",
"organizations_url": "https://api.github.com/users/johntheprime/orgs",
"repos_url": "https://api.github.com/users/johntheprime/repos",
"events_url": "https://api.github.com/users/johntheprime/events{/privacy}",
"received_events_url": "https://api.github.com/users/johntheprime/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T12:42:50
| 2025-10-22T13:59:02
| 2025-10-22T13:58:19
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41788",
"html_url": "https://github.com/huggingface/transformers/pull/41788",
"diff_url": "https://github.com/huggingface/transformers/pull/41788.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41788.patch",
"merged_at": "2025-10-22T13:58:19"
}
|
# What does this PR do?
This PR fixes a type annotation typo in a docstring.
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41788/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41787
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41787/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41787/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41787/events
|
https://github.com/huggingface/transformers/pull/41787
| 3,540,681,549
|
PR_kwDOCUB6oc6vDxKL
| 41,787
|
Bump hfh prerelease v1.0.0.rc7
|
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-22T12:33:58
| 2025-10-22T12:43:14
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41787",
"html_url": "https://github.com/huggingface/transformers/pull/41787",
"diff_url": "https://github.com/huggingface/transformers/pull/41787.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41787.patch",
"merged_at": null
}
|
Might be the last pre-release to test? :tada:
I don't expect anything to break but better safe than sorry :hugs:
(cc @hanouticelina)
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41787/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41787/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41786
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41786/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41786/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41786/events
|
https://github.com/huggingface/transformers/pull/41786
| 3,540,495,094
|
PR_kwDOCUB6oc6vDIv3
| 41,786
|
Added AI text detection example in documentation
|
{
"login": "Abhijais4896",
"id": 126854907,
"node_id": "U_kgDOB4-m-w",
"avatar_url": "https://avatars.githubusercontent.com/u/126854907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abhijais4896",
"html_url": "https://github.com/Abhijais4896",
"followers_url": "https://api.github.com/users/Abhijais4896/followers",
"following_url": "https://api.github.com/users/Abhijais4896/following{/other_user}",
"gists_url": "https://api.github.com/users/Abhijais4896/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abhijais4896/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abhijais4896/subscriptions",
"organizations_url": "https://api.github.com/users/Abhijais4896/orgs",
"repos_url": "https://api.github.com/users/Abhijais4896/repos",
"events_url": "https://api.github.com/users/Abhijais4896/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abhijais4896/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T11:35:30
| 2025-10-22T13:44:28
| 2025-10-22T13:44:28
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41786",
"html_url": "https://github.com/huggingface/transformers/pull/41786",
"diff_url": "https://github.com/huggingface/transformers/pull/41786.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41786.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41786/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41785
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41785/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41785/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41785/events
|
https://github.com/huggingface/transformers/pull/41785
| 3,540,448,744
|
PR_kwDOCUB6oc6vC_EY
| 41,785
|
[quantization] Skip Fp8 tests when hardware capability < 8.9
|
{
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T11:20:33
| 2025-10-22T11:33:31
| 2025-10-22T11:33:29
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41785",
"html_url": "https://github.com/huggingface/transformers/pull/41785",
"diff_url": "https://github.com/huggingface/transformers/pull/41785.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41785.patch",
"merged_at": "2025-10-22T11:33:29"
}
|
# What does this PR do?
Skips FP8 tests when the hardware compute capability is < 8.9 (NVIDIA 4090).
The tests error in the CI on GPUs with capability 8.6: https://github.com/huggingface/transformers/actions/runs/18703774699/job/53337972742
|
{
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41785/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41784
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41784/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41784/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41784/events
|
https://github.com/huggingface/transformers/pull/41784
| 3,540,397,379
|
PR_kwDOCUB6oc6vCz51
| 41,784
|
4.1V Model and GLM-4.5V Model Conversion Code Updates
|
{
"login": "zRzRzRzRzRzRzR",
"id": 93239683,
"node_id": "U_kgDOBY65gw",
"avatar_url": "https://avatars.githubusercontent.com/u/93239683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zRzRzRzRzRzRzR",
"html_url": "https://github.com/zRzRzRzRzRzRzR",
"followers_url": "https://api.github.com/users/zRzRzRzRzRzRzR/followers",
"following_url": "https://api.github.com/users/zRzRzRzRzRzRzR/following{/other_user}",
"gists_url": "https://api.github.com/users/zRzRzRzRzRzRzR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zRzRzRzRzRzRzR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zRzRzRzRzRzRzR/subscriptions",
"organizations_url": "https://api.github.com/users/zRzRzRzRzRzRzR/orgs",
"repos_url": "https://api.github.com/users/zRzRzRzRzRzRzR/repos",
"events_url": "https://api.github.com/users/zRzRzRzRzRzRzR/events{/privacy}",
"received_events_url": "https://api.github.com/users/zRzRzRzRzRzRzR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-22T11:04:37
| 2025-10-29T10:40:49
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41784",
"html_url": "https://github.com/huggingface/transformers/pull/41784",
"diff_url": "https://github.com/huggingface/transformers/pull/41784.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41784.patch",
"merged_at": null
}
|
+ Fixed weight conversion issues for some model providers and removed some debug logs
+ Simplified some functions
|
{
"login": "zRzRzRzRzRzRzR",
"id": 93239683,
"node_id": "U_kgDOBY65gw",
"avatar_url": "https://avatars.githubusercontent.com/u/93239683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zRzRzRzRzRzRzR",
"html_url": "https://github.com/zRzRzRzRzRzRzR",
"followers_url": "https://api.github.com/users/zRzRzRzRzRzRzR/followers",
"following_url": "https://api.github.com/users/zRzRzRzRzRzRzR/following{/other_user}",
"gists_url": "https://api.github.com/users/zRzRzRzRzRzRzR/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zRzRzRzRzRzRzR/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zRzRzRzRzRzRzR/subscriptions",
"organizations_url": "https://api.github.com/users/zRzRzRzRzRzRzR/orgs",
"repos_url": "https://api.github.com/users/zRzRzRzRzRzRzR/repos",
"events_url": "https://api.github.com/users/zRzRzRzRzRzRzR/events{/privacy}",
"received_events_url": "https://api.github.com/users/zRzRzRzRzRzRzR/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41784/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41783
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41783/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41783/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41783/events
|
https://github.com/huggingface/transformers/pull/41783
| 3,540,250,509
|
PR_kwDOCUB6oc6vCTrO
| 41,783
|
[`Gemma3n`] Fix regression in test
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T10:30:00
| 2025-10-25T00:53:53
| 2025-10-22T10:39:31
|
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41783",
"html_url": "https://github.com/huggingface/transformers/pull/41783",
"diff_url": "https://github.com/huggingface/transformers/pull/41783.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41783.patch",
"merged_at": null
}
|
Draft for now while checking a few things.
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41783/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41782
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41782/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41782/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41782/events
|
https://github.com/huggingface/transformers/issues/41782
| 3,540,188,385
|
I_kwDOCUB6oc7TAvzh
| 41,782
|
Local pretrained models cannot be loaded with multithreading
|
{
"login": "Crissium",
"id": 91039086,
"node_id": "MDQ6VXNlcjkxMDM5MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/91039086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Crissium",
"html_url": "https://github.com/Crissium",
"followers_url": "https://api.github.com/users/Crissium/followers",
"following_url": "https://api.github.com/users/Crissium/following{/other_user}",
"gists_url": "https://api.github.com/users/Crissium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Crissium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Crissium/subscriptions",
"organizations_url": "https://api.github.com/users/Crissium/orgs",
"repos_url": "https://api.github.com/users/Crissium/repos",
"events_url": "https://api.github.com/users/Crissium/events{/privacy}",
"received_events_url": "https://api.github.com/users/Crissium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T10:15:57
| 2025-10-22T23:11:52
| 2025-10-22T23:11:52
|
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.56.1
- Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.39
- Python version: 3.12.11
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 1.8.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: Parallel
- Using GPU in script?: Yes
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
@Cyrilvallez
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Simplified script to reproduce:
```python
import random
import torch
import traceback
import transformers
from concurrent.futures import ThreadPoolExecutor
model_class = transformers.T5ForConditionalGeneration
def load_model() -> None:
    device_index = random.randint(0, torch.cuda.device_count() - 1)
    device = torch.device(f'cuda:{device_index}')
    model = model_class.from_pretrained('google-t5/t5-small')
    print(f'Loaded model to {model.device}')
    try:
        model.to(device)
        print(f'Moved model to {device}')
    except:
        print(f'Failed to move model to {device}')
        traceback.print_exc()

if __name__ == '__main__':
    with ThreadPoolExecutor(max_workers=torch.cuda.device_count()) as executor:
        futures = [executor.submit(load_model) for _ in range(10)]
        for future in futures:
            future.result()
```
Here I use a locally cached Hub model, but in my actual usage each thread loads a different checkpoint stored locally. Either way, a "no data" error is raised:
```
Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at google-t5/t5-small and are newly initialized: ['lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at google-t5/t5-small and are newly initialized: ['lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at google-t5/t5-small and are newly initialized: ['lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loaded model to meta
Loaded model to meta
Loaded model to meta
Failed to move model to cuda:2
Failed to move model to cuda:6
Traceback (most recent call last):
File "/mnt/users_home/cpii.local/yxing/Workspace/ASR/Study/hf_multithreaded_loading.py", line 17, in load_model
model.to(device)
File "/mnt/users_home/cpii.local/yxing/miniconda3/envs/g/lib/python3.12/site-packages/transformers/modeling_utils.py", line 4459, in to
return super().to(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/users_home/cpii.local/yxing/miniconda3/envs/g/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1355, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "/mnt/users_home/cpii.local/yxing/miniconda3/envs/g/lib/python3.12/site-packages/torch/nn/modules/module.py", line 915, in _apply
module._apply(fn)
File "/mnt/users_home/cpii.local/yxing/miniconda3/envs/g/lib/python3.12/site-packages/torch/nn/modules/module.py", line 942, in _apply
param_applied = fn(param)
^^^^^^^^^
File "/mnt/users_home/cpii.local/yxing/miniconda3/envs/g/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1348, in convert
raise NotImplementedError(
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
```
There's a similar issue: #40357, but I haven't tried the older versions.
### Expected behavior
Pretrained models are loaded successfully with multithreading.
Edit: Just a quick note to anyone who runs into this: using multiprocessing is perfectly fine for me.
|
{
"login": "Crissium",
"id": 91039086,
"node_id": "MDQ6VXNlcjkxMDM5MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/91039086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Crissium",
"html_url": "https://github.com/Crissium",
"followers_url": "https://api.github.com/users/Crissium/followers",
"following_url": "https://api.github.com/users/Crissium/following{/other_user}",
"gists_url": "https://api.github.com/users/Crissium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Crissium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Crissium/subscriptions",
"organizations_url": "https://api.github.com/users/Crissium/orgs",
"repos_url": "https://api.github.com/users/Crissium/repos",
"events_url": "https://api.github.com/users/Crissium/events{/privacy}",
"received_events_url": "https://api.github.com/users/Crissium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41782/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41781
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41781/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41781/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41781/events
|
https://github.com/huggingface/transformers/pull/41781
| 3,540,144,434
|
PR_kwDOCUB6oc6vB8YZ
| 41,781
|
flash attn pytest marker
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T10:05:56
| 2025-10-23T08:39:21
| 2025-10-23T08:39:20
|
COLLABORATOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41781",
"html_url": "https://github.com/huggingface/transformers/pull/41781",
"diff_url": "https://github.com/huggingface/transformers/pull/41781.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41781.patch",
"merged_at": "2025-10-23T08:39:20"
}
|
# What does this PR do?
A first step toward making flash attn tested again 🔥
@vasqu Do we really need to have both
@pytest.mark.flash_attn_test
@pytest.mark.flash_attn_3_test
or can we have only `@pytest.mark.flash_attn_test`, which covers both?
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41781/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41781/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41780
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41780/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41780/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41780/events
|
https://github.com/huggingface/transformers/pull/41780
| 3,540,102,224
|
PR_kwDOCUB6oc6vBy8r
| 41,780
|
[quantization] fix compressed_tensors tests
|
{
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T09:55:51
| 2025-10-22T10:37:08
| 2025-10-22T10:37:07
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41780",
"html_url": "https://github.com/huggingface/transformers/pull/41780",
"diff_url": "https://github.com/huggingface/transformers/pull/41780.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41780.patch",
"merged_at": "2025-10-22T10:37:07"
}
|
# What does this PR do?
Fixes compressed_tensors-related tests. The failing tests pass locally.
|
{
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41780/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41779
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41779/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41779/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41779/events
|
https://github.com/huggingface/transformers/issues/41779
| 3,540,006,552
|
I_kwDOCUB6oc7TADaY
| 41,779
|
Broken documentation link in Transformers website
|
{
"login": "ruheena-shaik",
"id": 214262947,
"node_id": "U_kgDODMVkow",
"avatar_url": "https://avatars.githubusercontent.com/u/214262947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ruheena-shaik",
"html_url": "https://github.com/ruheena-shaik",
"followers_url": "https://api.github.com/users/ruheena-shaik/followers",
"following_url": "https://api.github.com/users/ruheena-shaik/following{/other_user}",
"gists_url": "https://api.github.com/users/ruheena-shaik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ruheena-shaik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruheena-shaik/subscriptions",
"organizations_url": "https://api.github.com/users/ruheena-shaik/orgs",
"repos_url": "https://api.github.com/users/ruheena-shaik/repos",
"events_url": "https://api.github.com/users/ruheena-shaik/events{/privacy}",
"received_events_url": "https://api.github.com/users/ruheena-shaik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-22T09:34:39
| 2025-10-22T10:05:49
| null |
NONE
| null | null | null | null |
A documentation link on the Transformers site redirects to a 404 page. It should point to the latest version of the model guide. Please verify and update the correct URL in the docs section.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41779/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41778
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41778/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41778/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41778/events
|
https://github.com/huggingface/transformers/pull/41778
| 3,539,976,805
|
PR_kwDOCUB6oc6vBXHe
| 41,778
|
Fix Qwen3-Omni RoPE
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-22T09:28:17
| 2025-10-23T07:56:14
| null |
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41778",
"html_url": "https://github.com/huggingface/transformers/pull/41778",
"diff_url": "https://github.com/huggingface/transformers/pull/41778.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41778.patch",
"merged_at": null
}
|
# What does this PR do?
As per title: after the last RoPE refactoring PR, the Qwen3-Omni model fails when loading. One of its many sub-configs doesn't call RoPE standardization, which causes the issue.
I also updated the slow tests with the correct checkpoint; right now they use Omni-2 checkpoints and thus do not test anything.
cc @BakerBunker
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41778/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41777
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41777/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41777/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41777/events
|
https://github.com/huggingface/transformers/pull/41777
| 3,539,783,054
|
PR_kwDOCUB6oc6vAwUO
| 41,777
|
[quantization] fix torchao tests after 0.14.0 release
|
{
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-22T08:50:22
| 2025-10-23T08:26:45
| 2025-10-23T08:26:45
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41777",
"html_url": "https://github.com/huggingface/transformers/pull/41777",
"diff_url": "https://github.com/huggingface/transformers/pull/41777.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41777.patch",
"merged_at": "2025-10-23T08:26:44"
}
|
# What does this PR do?
Fixes the deprecation of `int4_weight_only` in torchao>=0.14.0 https://github.com/pytorch/ao/pull/2994
|
{
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41777/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41776
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41776/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41776/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41776/events
|
https://github.com/huggingface/transformers/pull/41776
| 3,539,574,065
|
PR_kwDOCUB6oc6vAJkv
| 41,776
|
Add safety checking infrastructure for text generation
|
{
"login": "rice-e",
"id": 111106282,
"node_id": "U_kgDOBp9Y6g",
"avatar_url": "https://avatars.githubusercontent.com/u/111106282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rice-e",
"html_url": "https://github.com/rice-e",
"followers_url": "https://api.github.com/users/rice-e/followers",
"following_url": "https://api.github.com/users/rice-e/following{/other_user}",
"gists_url": "https://api.github.com/users/rice-e/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rice-e/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rice-e/subscriptions",
"organizations_url": "https://api.github.com/users/rice-e/orgs",
"repos_url": "https://api.github.com/users/rice-e/repos",
"events_url": "https://api.github.com/users/rice-e/events{/privacy}",
"received_events_url": "https://api.github.com/users/rice-e/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-22T08:03:16
| 2025-10-22T08:03:16
| null |
NONE
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41776",
"html_url": "https://github.com/huggingface/transformers/pull/41776",
"diff_url": "https://github.com/huggingface/transformers/pull/41776.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41776.patch",
"merged_at": null
}
|
**Draft PR:** Core implementation complete. Seeking feedback on design and approach before finalizing. Thank you!
# What does this PR do?
Adds safety checking infrastructure for text generation. Provides base classes, configuration, and processors that integrate with the generation pipeline. Users implement their own safety checkers for specific needs (harm, bias, PII, etc.).
Fixes #41740
# Motivation
As stated in the issue I opened, while production LLMs have built-in safety moderation systems, they are often insufficient and can lead to unexpected harmful behavior, especially over long conversations. As open-source text generation models become more capable and widely used, mitigating harm and ensuring user safety is a feature that should be built in. As far as I am aware, there is currently no built-in infrastructure to support this. The most effective approaches involve moderation during inference, which is a non-trivial feature for Transformers users to implement on their own. In addition, allowing for the configuration of safety with custom settings and classifiers can allow users to avoid harm in more specialized contexts than commercial LLMs currently address.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Currently being discussed in #41740
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41776/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41775
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41775/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41775/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41775/events
|
https://github.com/huggingface/transformers/issues/41775
| 3,539,495,777
|
I_kwDOCUB6oc7S-Gth
| 41,775
|
Hugging Face website and models not reachable
|
{
"login": "christian-rauch",
"id": 8226248,
"node_id": "MDQ6VXNlcjgyMjYyNDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8226248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/christian-rauch",
"html_url": "https://github.com/christian-rauch",
"followers_url": "https://api.github.com/users/christian-rauch/followers",
"following_url": "https://api.github.com/users/christian-rauch/following{/other_user}",
"gists_url": "https://api.github.com/users/christian-rauch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/christian-rauch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/christian-rauch/subscriptions",
"organizations_url": "https://api.github.com/users/christian-rauch/orgs",
"repos_url": "https://api.github.com/users/christian-rauch/repos",
"events_url": "https://api.github.com/users/christian-rauch/events{/privacy}",
"received_events_url": "https://api.github.com/users/christian-rauch/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-22T07:40:32
| 2025-10-23T15:43:23
| null |
NONE
| null | null | null | null |
### System Info
```
$ pip show transformers
Name: transformers
Version: 4.57.1
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: transformers@huggingface.co
```
```
$ python --version
Python 3.12.3
```
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. `python -c 'from transformers import pipeline; pipeline = pipeline(task="text-generation", model="Qwen/Qwen2.5-1.5B")'`
I am getting connection issues:
```
OSError: We couldn't connect to 'https://huggingface.co' to load the files, and couldn't find them in the cached files.
Check your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
```
It is rather funny that it recommends checking https://huggingface.co/docs/transformers/installation#offline-mode when https://huggingface.co is not reachable :-) Maybe this information, e.g. about mirrors, could be hosted somewhere else?
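For reference, when the Hub is unreachable but the model files are already cached, Transformers can be forced into offline mode via documented environment variables (a workaround for the cached case, not a fix for the outage itself):

```python
import os

# Force offline mode: use only locally cached files, make no calls to huggingface.co.
# These must be set before transformers/huggingface_hub perform any downloads.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"
```

With these set, `pipeline(...)` loads from the local cache and fails fast (instead of timing out) if a file is missing.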
### Expected behavior
The examples should work as documented.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41775/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41774
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41774/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41774/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41774/events
|
https://github.com/huggingface/transformers/pull/41774
| 3,539,270,577
|
PR_kwDOCUB6oc6u_KZF
| 41,774
|
[WIP] Add: `ModernVBert`
|
{
"login": "paultltc",
"id": 73120933,
"node_id": "MDQ6VXNlcjczMTIwOTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/73120933?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paultltc",
"html_url": "https://github.com/paultltc",
"followers_url": "https://api.github.com/users/paultltc/followers",
"following_url": "https://api.github.com/users/paultltc/following{/other_user}",
"gists_url": "https://api.github.com/users/paultltc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paultltc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paultltc/subscriptions",
"organizations_url": "https://api.github.com/users/paultltc/orgs",
"repos_url": "https://api.github.com/users/paultltc/repos",
"events_url": "https://api.github.com/users/paultltc/events{/privacy}",
"received_events_url": "https://api.github.com/users/paultltc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-22T06:15:23
| 2025-10-22T06:18:05
| null |
NONE
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41774",
"html_url": "https://github.com/huggingface/transformers/pull/41774",
"diff_url": "https://github.com/huggingface/transformers/pull/41774.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41774.patch",
"merged_at": null
}
|
# What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Multimodal Model Addition Checklist
Please ensure your PR completes all following items. See the [full checklist](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#vision-language-model-contribution-checklist) for details.
- [ ] **Modular file**: `modular_<model_name>.py` implemented and verified with `python utils/modular_model_converter.py <model_name>`
- [ ] **Fast image processor**: Implemented using `BaseImageProcessorFast` (see [#36978](https://github.com/huggingface/transformers/issues/36978))
- [ ] **Conversion script**: `convert_<model_name>_to_hf.py` added with usage examples
- [ ] **Integration tests**: End-to-end tests with exact output matching (text or logits)
- [ ] **Documentation**: Model docs added/updated in `docs/source/en/model_doc/`
- [ ] **Pattern reuse**: Verified against similar models (LLaVA, Idefics2, etc.)
- [ ] **Quality checks**: `make fixup` passes with no errors
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@yonigozlan (still draft PR as need to be tested, I will notify you when it's ready! 🤗 )
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41774/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41773
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41773/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41773/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41773/events
|
https://github.com/huggingface/transformers/pull/41773
| 3,538,394,560
|
PR_kwDOCUB6oc6u8QHK
| 41,773
|
Simplify and standardize processor tests
|
{
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-21T22:33:49
| 2025-10-23T13:40:22
| null |
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41773",
"html_url": "https://github.com/huggingface/transformers/pull/41773",
"diff_url": "https://github.com/huggingface/transformers/pull/41773.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41773.patch",
"merged_at": null
}
|
# What does this PR do?
Improve ProcessorTestMixin to standardize processor tests, especially the setup part.
Requires https://github.com/huggingface/transformers/pull/41633
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41773/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41772
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41772/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41772/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41772/events
|
https://github.com/huggingface/transformers/pull/41772
| 3,538,308,688
|
PR_kwDOCUB6oc6u79pn
| 41,772
|
Add safety checking infrastructure for text generation
|
{
"login": "jameslovespancakes",
"id": 220026352,
"node_id": "U_kgDODR1V8A",
"avatar_url": "https://avatars.githubusercontent.com/u/220026352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jameslovespancakes",
"html_url": "https://github.com/jameslovespancakes",
"followers_url": "https://api.github.com/users/jameslovespancakes/followers",
"following_url": "https://api.github.com/users/jameslovespancakes/following{/other_user}",
"gists_url": "https://api.github.com/users/jameslovespancakes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jameslovespancakes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jameslovespancakes/subscriptions",
"organizations_url": "https://api.github.com/users/jameslovespancakes/orgs",
"repos_url": "https://api.github.com/users/jameslovespancakes/repos",
"events_url": "https://api.github.com/users/jameslovespancakes/events{/privacy}",
"received_events_url": "https://api.github.com/users/jameslovespancakes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 9258341780,
"node_id": "LA_kwDOCUB6oc8AAAACJ9cVlA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Code%20agent%20slop",
"name": "Code agent slop",
"color": "C59579",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-21T21:59:01
| 2025-10-22T13:40:43
| 2025-10-22T13:40:37
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41772",
"html_url": "https://github.com/huggingface/transformers/pull/41772",
"diff_url": "https://github.com/huggingface/transformers/pull/41772.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41772.patch",
"merged_at": null
}
|
# What does this PR do?
This PR adds a comprehensive safety checking infrastructure for text generation, enabling built-in content moderation similar to production LLMs (ChatGPT, Gemini, Claude).
## Issue
Open-source text generation models lack built-in safety moderation infrastructure. While production LLMs have safety systems, implementing effective moderation during inference is non-trivial for Transformers users. There's currently no native infrastructure to support this.
Issue: #41740
## Solution
This PR implements:
- **Abstract SafetyChecker base class** for pluggable safety implementations
- **KeywordSafetyChecker** reference implementation with keyword/pattern blocking
- **SafetyStoppingCriteria** to halt generation on safety violations
- **SafetyLogitsProcessor** to filter unsafe token continuations
- **GenerationConfig integration** with automatic processor/criteria instantiation
- **Comprehensive safety parameters** (check frequency, penalty values, violation handling)
## Key Features
- ✅ Automatic integration - just configure GenerationConfig, no manual setup
- ✅ String-based or instance-based safety checker specification
- ✅ Configurable modes (stop on violation, filter violations, or both)
- ✅ Performance tuning via check frequency
- ✅ Extensible architecture for custom safety checkers
- ✅ Batch processing support
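As an illustration of the extensibility point above, a custom checker might look like the following. This is a minimal sketch only: the `SafetyChecker` base class and its `check` method are stand-ins for the API proposed in this PR, not an existing Transformers interface.

```python
import re
from abc import ABC, abstractmethod


class SafetyChecker(ABC):
    """Minimal stand-in for the abstract base class proposed in this PR."""

    @abstractmethod
    def check(self, text: str) -> bool:
        """Return True if `text` is safe, False on a violation."""


class RegexSafetyChecker(SafetyChecker):
    """Hypothetical custom checker that flags any text matching a blocked pattern."""

    def __init__(self, blocked_patterns):
        self._patterns = [re.compile(p, re.IGNORECASE) for p in blocked_patterns]

    def check(self, text: str) -> bool:
        # Safe only if no blocked pattern matches anywhere in the text
        return not any(p.search(text) for p in self._patterns)


checker = RegexSafetyChecker([r"\bexplicit\b"])
print(checker.check("a harmless sentence"))   # True
print(checker.check("some EXPLICIT content"))  # False
```

An instance like this would be passed in place of the string-based `safety_checker="keyword"` shortcut shown below.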
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
# Automatic safety integration
config = GenerationConfig(
safety_checker="keyword",
safety_checker_kwargs={"blocked_keywords": ["unsafe", "explicit"]},
safety_stop_on_violation=True,
safety_filter_violations=True,
)
inputs = tokenizer("Once upon a time", return_tensors="pt")  # inputs were undefined in the snippet
outputs = model.generate(**inputs, generation_config=config, tokenizer=tokenizer)
```
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41772/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41771
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41771/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41771/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41771/events
|
https://github.com/huggingface/transformers/pull/41771
| 3,538,302,393
|
PR_kwDOCUB6oc6u78Nv
| 41,771
|
fix: improve AutoTokenizer error message for missing vocab files (#41…
|
{
"login": "cwarre33",
"id": 169564666,
"node_id": "U_kgDOChtZ-g",
"avatar_url": "https://avatars.githubusercontent.com/u/169564666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cwarre33",
"html_url": "https://github.com/cwarre33",
"followers_url": "https://api.github.com/users/cwarre33/followers",
"following_url": "https://api.github.com/users/cwarre33/following{/other_user}",
"gists_url": "https://api.github.com/users/cwarre33/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cwarre33/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cwarre33/subscriptions",
"organizations_url": "https://api.github.com/users/cwarre33/orgs",
"repos_url": "https://api.github.com/users/cwarre33/repos",
"events_url": "https://api.github.com/users/cwarre33/events{/privacy}",
"received_events_url": "https://api.github.com/users/cwarre33/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-21T21:57:14
| 2025-10-29T12:21:35
| 2025-10-29T12:21:35
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41771",
"html_url": "https://github.com/huggingface/transformers/pull/41771",
"diff_url": "https://github.com/huggingface/transformers/pull/41771.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41771.patch",
"merged_at": null
}
|
…553)
Add clear error message when LlamaTokenizer fails to load because vocab_file is None, which typically happens when optional dependencies like mistral-common are not installed.
This provides users with:
- Clear explanation of why the error occurred
- Specific guidance for installing mistral-common for Voxtral/Mistral models
- General troubleshooting advice for other models
Before: Users would see a cryptic 'TypeError: not a string' from sentencepiece
After: Clear ValueError with actionable guidance
Fixes #41553
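The guard described above can be sketched roughly as follows. This is illustrative only: the function name `load_vocab` and the exact message wording are placeholders, and the real change lives at the Llama tokenizer's sentencepiece loading call site.

```python
def load_vocab(vocab_file):
    """Illustrative guard: raise an actionable ValueError instead of letting
    sentencepiece fail with a cryptic 'TypeError: not a string'."""
    if vocab_file is None:
        raise ValueError(
            "Cannot load tokenizer: `vocab_file` is None. For Voxtral/Mistral "
            "checkpoints this usually means `mistral-common` is not installed "
            "(`pip install mistral-common`); otherwise check that the checkpoint "
            "actually ships a sentencepiece vocabulary file."
        )
    # stand-in for spm.SentencePieceProcessor(model_file=vocab_file)
    return vocab_file
```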
# What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41771/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41770
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41770/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41770/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41770/events
|
https://github.com/huggingface/transformers/pull/41770
| 3,538,234,341
|
PR_kwDOCUB6oc6u7tUh
| 41,770
|
Add safety infrastructure
|
{
"login": "jameslovespancakes",
"id": 220026352,
"node_id": "U_kgDODR1V8A",
"avatar_url": "https://avatars.githubusercontent.com/u/220026352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jameslovespancakes",
"html_url": "https://github.com/jameslovespancakes",
"followers_url": "https://api.github.com/users/jameslovespancakes/followers",
"following_url": "https://api.github.com/users/jameslovespancakes/following{/other_user}",
"gists_url": "https://api.github.com/users/jameslovespancakes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jameslovespancakes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jameslovespancakes/subscriptions",
"organizations_url": "https://api.github.com/users/jameslovespancakes/orgs",
"repos_url": "https://api.github.com/users/jameslovespancakes/repos",
"events_url": "https://api.github.com/users/jameslovespancakes/events{/privacy}",
"received_events_url": "https://api.github.com/users/jameslovespancakes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-21T21:30:14
| 2025-10-21T21:42:07
| 2025-10-21T21:42:06
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41770",
"html_url": "https://github.com/huggingface/transformers/pull/41770",
"diff_url": "https://github.com/huggingface/transformers/pull/41770.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41770.patch",
"merged_at": null
}
|
# What does this PR do?
This PR adds a comprehensive safety checking infrastructure for text generation, enabling built-in content moderation similar to production LLMs (ChatGPT, Gemini, Claude).
## Issue
Open-source text generation models lack built-in safety moderation infrastructure. While production LLMs have safety systems, implementing effective moderation during inference is non-trivial for Transformers users. There's currently no native infrastructure to support this.
## Solution
This PR implements:
- **Abstract SafetyChecker base class** for pluggable safety implementations
- **KeywordSafetyChecker** reference implementation with keyword/pattern blocking
- **SafetyStoppingCriteria** to halt generation on safety violations
- **SafetyLogitsProcessor** to filter unsafe token continuations
- **GenerationConfig integration** with automatic processor/criteria instantiation
- **Comprehensive safety parameters** (check frequency, penalty values, violation handling)
## Key Features
- ✅ Automatic integration - just configure GenerationConfig, no manual setup
- ✅ String-based or instance-based safety checker specification
- ✅ Configurable modes (stop on violation, filter violations, or both)
- ✅ Performance tuning via check frequency
- ✅ Extensible architecture for custom safety checkers
- ✅ Batch processing support
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
# Automatic safety integration
config = GenerationConfig(
safety_checker="keyword",
safety_checker_kwargs={"blocked_keywords": ["unsafe", "explicit"]},
safety_stop_on_violation=True,
safety_filter_violations=True,
)
inputs = tokenizer("Once upon a time", return_tensors="pt")  # inputs were undefined in the snippet
outputs = model.generate(**inputs, generation_config=config, tokenizer=tokenizer)
```
## Testing
- 25 comprehensive tests covering all components
- Integration tests verify automatic GenerationMixin integration
- Tests cover keyword checking, custom checkers, stopping criteria, logits processing

Fixes #41740
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - Yes, #41740
- [ ] Did you make sure to update the documentation with your changes?
- [x] Did you write any new necessary tests? - Yes, 25 tests
|
{
"login": "jameslovespancakes",
"id": 220026352,
"node_id": "U_kgDODR1V8A",
"avatar_url": "https://avatars.githubusercontent.com/u/220026352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jameslovespancakes",
"html_url": "https://github.com/jameslovespancakes",
"followers_url": "https://api.github.com/users/jameslovespancakes/followers",
"following_url": "https://api.github.com/users/jameslovespancakes/following{/other_user}",
"gists_url": "https://api.github.com/users/jameslovespancakes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jameslovespancakes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jameslovespancakes/subscriptions",
"organizations_url": "https://api.github.com/users/jameslovespancakes/orgs",
"repos_url": "https://api.github.com/users/jameslovespancakes/repos",
"events_url": "https://api.github.com/users/jameslovespancakes/events{/privacy}",
"received_events_url": "https://api.github.com/users/jameslovespancakes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41770/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41769
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41769/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41769/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41769/events
|
https://github.com/huggingface/transformers/pull/41769
| 3,537,443,028
|
PR_kwDOCUB6oc6u4_VI
| 41,769
|
Fix: handled index_error in set_zero3_state
|
{
"login": "Aaraviitkgp",
"id": 196036487,
"node_id": "U_kgDOC69Hhw",
"avatar_url": "https://avatars.githubusercontent.com/u/196036487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aaraviitkgp",
"html_url": "https://github.com/Aaraviitkgp",
"followers_url": "https://api.github.com/users/Aaraviitkgp/followers",
"following_url": "https://api.github.com/users/Aaraviitkgp/following{/other_user}",
"gists_url": "https://api.github.com/users/Aaraviitkgp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aaraviitkgp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aaraviitkgp/subscriptions",
"organizations_url": "https://api.github.com/users/Aaraviitkgp/orgs",
"repos_url": "https://api.github.com/users/Aaraviitkgp/repos",
"events_url": "https://api.github.com/users/Aaraviitkgp/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aaraviitkgp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-21T17:06:22
| 2025-10-22T16:22:02
| 2025-10-22T16:22:02
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41769",
"html_url": "https://github.com/huggingface/transformers/pull/41769",
"diff_url": "https://github.com/huggingface/transformers/pull/41769.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41769.patch",
"merged_at": null
}
|
Fixes #41762
This PR fixes a potential `IndexError` and inconsistent initialization behavior that can occur during DeepSpeed initialization (`set_zero3_state`) and model weight setup (`initialize_weights`).
Added a conditional check to resolve the issue.
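A hedged sketch of that kind of guard (the `params` list and the indexing pattern are assumptions for illustration; the actual `set_zero3_state` code may differ):

```python
# Hypothetical guard: avoid IndexError when the sequence being indexed is empty.
def first_param_or_none(params):
    # Only index when there is something to index; otherwise fall back to None.
    if len(params) > 0:
        return params[0]
    return None

assert first_param_or_none([]) is None
assert first_param_or_none(["weight"]) == "weight"
```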
@3outeille @ArthurZucker
|
{
"login": "Aaraviitkgp",
"id": 196036487,
"node_id": "U_kgDOC69Hhw",
"avatar_url": "https://avatars.githubusercontent.com/u/196036487?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aaraviitkgp",
"html_url": "https://github.com/Aaraviitkgp",
"followers_url": "https://api.github.com/users/Aaraviitkgp/followers",
"following_url": "https://api.github.com/users/Aaraviitkgp/following{/other_user}",
"gists_url": "https://api.github.com/users/Aaraviitkgp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aaraviitkgp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aaraviitkgp/subscriptions",
"organizations_url": "https://api.github.com/users/Aaraviitkgp/orgs",
"repos_url": "https://api.github.com/users/Aaraviitkgp/repos",
"events_url": "https://api.github.com/users/Aaraviitkgp/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aaraviitkgp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41769/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41768
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41768/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41768/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41768/events
|
https://github.com/huggingface/transformers/pull/41768
| 3,536,773,950
|
PR_kwDOCUB6oc6u2xE9
| 41,768
|
🚀 Optimize MoE and Mamba performance with vectorized operations
|
{
"login": "faizan842",
"id": 91795555,
"node_id": "U_kgDOBXiwYw",
"avatar_url": "https://avatars.githubusercontent.com/u/91795555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faizan842",
"html_url": "https://github.com/faizan842",
"followers_url": "https://api.github.com/users/faizan842/followers",
"following_url": "https://api.github.com/users/faizan842/following{/other_user}",
"gists_url": "https://api.github.com/users/faizan842/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faizan842/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faizan842/subscriptions",
"organizations_url": "https://api.github.com/users/faizan842/orgs",
"repos_url": "https://api.github.com/users/faizan842/repos",
"events_url": "https://api.github.com/users/faizan842/events{/privacy}",
"received_events_url": "https://api.github.com/users/faizan842/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-21T14:14:18
| 2025-10-21T14:32:44
| 2025-10-21T14:32:37
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41768",
"html_url": "https://github.com/huggingface/transformers/pull/41768",
"diff_url": "https://github.com/huggingface/transformers/pull/41768.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41768.patch",
"merged_at": null
}
|
## Performance Optimization
This PR optimizes two major performance bottlenecks in the transformers library:
### 🔍 Problems Identified
1. **Aria MoE Sequential GEMM**: The `sequential_experts_gemm` function used inefficient sequential processing for expert computations
2. **Jamba Mamba Slow Forward**: The `slow_forward` method had a sequential loop in the state space model computation
### ⚡ Solutions Implemented
#### **Aria MoE Optimization**
- **Vectorized Expert Selection**: Use advanced indexing instead of sequential loops
- **Batch Matrix Multiplication**: Use torch.bmm for efficient computation
- **Eliminated Sequential Processing**: Replace O(n) sequential operations with vectorized operations
#### **Jamba Mamba Optimization**
- **Vectorized State Computation**: Pre-allocate output tensors and use vectorized operations
- **Optimized Matrix Operations**: Use torch.bmm for batch matrix multiplication
- **Reduced Memory Allocations**: Eliminate repeated tensor creation in loops
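The batched-GEMM idea behind the Aria change can be sketched in NumPy (shapes and variable names here are illustrative, not the actual `sequential_experts_gemm` code): a single batched matmul, the NumPy analogue of `torch.bmm`, replaces the per-expert Python loop while producing identical results.

```python
import numpy as np

rng = np.random.default_rng(0)
num_experts, tokens_per_expert, hidden, ffn = 4, 3, 8, 16

# One weight matrix per expert, and each expert's token batch (hypothetical shapes).
expert_weights = rng.standard_normal((num_experts, hidden, ffn))
expert_inputs = rng.standard_normal((num_experts, tokens_per_expert, hidden))

# Sequential baseline: one GEMM per expert in a Python loop.
sequential = np.stack(
    [expert_inputs[e] @ expert_weights[e] for e in range(num_experts)]
)

# Vectorized: a single batched matmul over the leading expert dimension.
batched = np.matmul(expert_inputs, expert_weights)

assert np.allclose(sequential, batched)
```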
### 📊 Performance Improvements
- **Aria MoE**: Significant speedup for Mixture of Experts models with many experts
- **Jamba Mamba**: Faster state space model computation for long sequences
- **GPU Utilization**: Better utilization of parallel processing capabilities
- **Memory Efficiency**: Reduced memory allocations and better cache usage
### 🎯 Impact
- **All MoE models** benefit from the Aria optimization
- **All Mamba-based models** benefit from the Jamba optimization
- **Long sequences** see the most improvement
- **Backward compatible** - no API changes
- **Production ready** - maintains exact same functionality
### 🧪 Testing
- ✅ All existing tests pass
- ✅ Results are identical to original implementation
- ✅ Vectorized operations tested and verified
- ✅ Both modeling and modular files updated
### 📁 Files Changed
- `src/transformers/models/aria/modeling_aria.py` - Optimized MoE GEMM
- `src/transformers/models/aria/modular_aria.py` - Optimized MoE GEMM
- `src/transformers/models/jamba/modeling_jamba.py` - Optimized Mamba forward
- `src/transformers/models/jamba/modular_jamba.py` - Optimized Mamba forward
This optimization addresses critical performance bottlenecks in modern transformer architectures and will significantly improve the user experience for MoE and Mamba models.
|
{
"login": "faizan842",
"id": 91795555,
"node_id": "U_kgDOBXiwYw",
"avatar_url": "https://avatars.githubusercontent.com/u/91795555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faizan842",
"html_url": "https://github.com/faizan842",
"followers_url": "https://api.github.com/users/faizan842/followers",
"following_url": "https://api.github.com/users/faizan842/following{/other_user}",
"gists_url": "https://api.github.com/users/faizan842/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faizan842/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faizan842/subscriptions",
"organizations_url": "https://api.github.com/users/faizan842/orgs",
"repos_url": "https://api.github.com/users/faizan842/repos",
"events_url": "https://api.github.com/users/faizan842/events{/privacy}",
"received_events_url": "https://api.github.com/users/faizan842/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41768/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41767
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41767/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41767/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41767/events
|
https://github.com/huggingface/transformers/pull/41767
| 3,536,627,933
|
PR_kwDOCUB6oc6u2RDL
| 41,767
|
fix: Gemma 3 weights conversion vision and multimodal projector paths
|
{
"login": "RyanMullins",
"id": 868555,
"node_id": "MDQ6VXNlcjg2ODU1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/868555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanMullins",
"html_url": "https://github.com/RyanMullins",
"followers_url": "https://api.github.com/users/RyanMullins/followers",
"following_url": "https://api.github.com/users/RyanMullins/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanMullins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanMullins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanMullins/subscriptions",
"organizations_url": "https://api.github.com/users/RyanMullins/orgs",
"repos_url": "https://api.github.com/users/RyanMullins/repos",
"events_url": "https://api.github.com/users/RyanMullins/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanMullins/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-21T13:39:55
| 2025-10-22T09:39:38
| 2025-10-22T09:38:57
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41767",
"html_url": "https://github.com/huggingface/transformers/pull/41767",
"diff_url": "https://github.com/huggingface/transformers/pull/41767.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41767.patch",
"merged_at": "2025-10-22T09:38:57"
}
|
# What does this PR do?
Fixes bugs in the Gemma 3 weights conversion script:
* It looked for the multi-modal projector weights at the wrong location
* It used the wrong prefix on the vision model weights
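The shape of such a conversion-script fix can be sketched as a prefix remap over state-dict keys; the keys and prefixes below are hypothetical placeholders, not the actual Gemma 3 checkpoint names:

```python
# Illustrative only: made-up source keys and target prefixes, to show the
# pattern of fixing wrong prefixes during checkpoint conversion.
old_state_dict = {
    "vision_model.encoder.layer0.weight": 1.0,        # hypothetical source key
    "multimodal_projector.linear.weight": 2.0,        # hypothetical source key
}
prefix_map = {
    "vision_model.": "vision_tower.",                    # fix the vision prefix
    "multimodal_projector.": "multi_modal_projector.",   # fix the projector path
}

def remap_key(key):
    for old, new in prefix_map.items():
        if key.startswith(old):
            return new + key[len(old):]
    return key

new_state_dict = {remap_key(k): v for k, v in old_state_dict.items()}
print(sorted(new_state_dict))
```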
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@zucchini-nlp
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41767/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41767/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41766
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41766/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41766/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41766/events
|
https://github.com/huggingface/transformers/pull/41766
| 3,536,431,258
|
PR_kwDOCUB6oc6u1lp3
| 41,766
|
Fix TypeError when loading adapter models with _adapter_model_path
|
{
"login": "jameslovespancakes",
"id": 220026352,
"node_id": "U_kgDODR1V8A",
"avatar_url": "https://avatars.githubusercontent.com/u/220026352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jameslovespancakes",
"html_url": "https://github.com/jameslovespancakes",
"followers_url": "https://api.github.com/users/jameslovespancakes/followers",
"following_url": "https://api.github.com/users/jameslovespancakes/following{/other_user}",
"gists_url": "https://api.github.com/users/jameslovespancakes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jameslovespancakes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jameslovespancakes/subscriptions",
"organizations_url": "https://api.github.com/users/jameslovespancakes/orgs",
"repos_url": "https://api.github.com/users/jameslovespancakes/repos",
"events_url": "https://api.github.com/users/jameslovespancakes/events{/privacy}",
"received_events_url": "https://api.github.com/users/jameslovespancakes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-21T12:55:44
| 2025-10-21T13:10:21
| 2025-10-21T13:09:59
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41766",
"html_url": "https://github.com/huggingface/transformers/pull/41766",
"diff_url": "https://github.com/huggingface/transformers/pull/41766.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41766.patch",
"merged_at": null
}
|
# What does this PR do?
This PR resolves a TypeError that occurs when loading models with LoRA adapters (like ibm-granite/granite-speech-3.3-2b).
## Issue
When calling `AutoModelForSpeechSeq2Seq.from_pretrained()` on models with LoRA adapters, the following error occurs:
`TypeError: find_adapter_config_file() got an unexpected keyword argument '_adapter_model_path'`
## Root Cause
In `auto_factory.py` line 302, `_adapter_model_path` is added to `adapter_kwargs`. Later, in `peft.py` line 221, these `adapter_kwargs` are passed to `find_adapter_config_file()` via `**adapter_kwargs`, but `find_adapter_config_file()` doesn't accept `_adapter_model_path` as a parameter.
## Solution
Remove `_adapter_model_path` from `adapter_kwargs` in `peft.py` before passing it to `find_adapter_config_file()`.
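A minimal sketch of the failure mode and the fix (the stand-in `find_adapter_config_file` below uses a simplified signature for illustration, not the real transformers one):

```python
# Stand-in with an explicit signature, so an unexpected kwarg raises TypeError
# just like the real function described above.
def find_adapter_config_file(model_id, revision=None):
    return f"{model_id}/adapter_config.json"

adapter_kwargs = {"_adapter_model_path": "/tmp/adapter", "revision": "main"}

# Before the fix: forwarding everything raises the TypeError from the issue.
try:
    find_adapter_config_file("ibm-granite/granite-speech-3.3-2b", **adapter_kwargs)
except TypeError as err:
    print(err)

# The fix: drop the private key before forwarding the remaining kwargs.
adapter_kwargs.pop("_adapter_model_path", None)
path = find_adapter_config_file("ibm-granite/granite-speech-3.3-2b", **adapter_kwargs)
assert path.endswith("adapter_config.json")
```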
## Testing
Verified the fix resolves the TypeError when loading `ibm-granite/granite-speech-3.3-2b`.
Fixes #41760
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@BenjaminBossan @Cyrilvallez @ArthurZucker
|
{
"login": "jameslovespancakes",
"id": 220026352,
"node_id": "U_kgDODR1V8A",
"avatar_url": "https://avatars.githubusercontent.com/u/220026352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jameslovespancakes",
"html_url": "https://github.com/jameslovespancakes",
"followers_url": "https://api.github.com/users/jameslovespancakes/followers",
"following_url": "https://api.github.com/users/jameslovespancakes/following{/other_user}",
"gists_url": "https://api.github.com/users/jameslovespancakes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jameslovespancakes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jameslovespancakes/subscriptions",
"organizations_url": "https://api.github.com/users/jameslovespancakes/orgs",
"repos_url": "https://api.github.com/users/jameslovespancakes/repos",
"events_url": "https://api.github.com/users/jameslovespancakes/events{/privacy}",
"received_events_url": "https://api.github.com/users/jameslovespancakes/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41766/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41765
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41765/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41765/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41765/events
|
https://github.com/huggingface/transformers/pull/41765
| 3,536,013,440
|
PR_kwDOCUB6oc6u0K68
| 41,765
|
[kernels] Add Tests & CI for kernels
|
{
"login": "MekkCyber",
"id": 93391238,
"node_id": "U_kgDOBZEJhg",
"avatar_url": "https://avatars.githubusercontent.com/u/93391238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MekkCyber",
"html_url": "https://github.com/MekkCyber",
"followers_url": "https://api.github.com/users/MekkCyber/followers",
"following_url": "https://api.github.com/users/MekkCyber/following{/other_user}",
"gists_url": "https://api.github.com/users/MekkCyber/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MekkCyber/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MekkCyber/subscriptions",
"organizations_url": "https://api.github.com/users/MekkCyber/orgs",
"repos_url": "https://api.github.com/users/MekkCyber/repos",
"events_url": "https://api.github.com/users/MekkCyber/events{/privacy}",
"received_events_url": "https://api.github.com/users/MekkCyber/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-21T11:00:27
| 2025-10-23T08:39:04
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41765",
"html_url": "https://github.com/huggingface/transformers/pull/41765",
"diff_url": "https://github.com/huggingface/transformers/pull/41765.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41765.patch",
"merged_at": null
}
|
# What does this PR do?
Adds tests for kernels, a proper daily CI job, and Slack notifications.
run example : https://github.com/huggingface/transformers/actions/runs/18688016017/job/53285883834
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41765/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41764
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41764/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41764/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41764/events
|
https://github.com/huggingface/transformers/pull/41764
| 3,536,005,142
|
PR_kwDOCUB6oc6u0JLy
| 41,764
|
Fix logic error in `prepare_inputs_for_generation` cache slicing condition
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-21T10:57:12
| 2025-10-23T06:20:11
| null |
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41764",
"html_url": "https://github.com/huggingface/transformers/pull/41764",
"diff_url": "https://github.com/huggingface/transformers/pull/41764.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41764.patch",
"merged_at": null
}
|
Fix logic error in `prepare_inputs_for_generation` cache slicing condition:
> TypeError: 'NoneType' object is not subscriptable
### Background
I think the PR
- #41505
introduced a logic error where the condition for calling `_cache_dependant_input_preparation` uses `is None` instead of `is not None`, causing crashes when `prepare_inputs_for_generation` is called with `past_key_values=None` and `use_cache=False`.
### Bug
PR #41505 introduced this condition:
```python
if past_key_values is None or use_cache:
inputs_embeds, input_ids = self._cache_dependant_input_preparation(...)
```
The condition `past_key_values is None or use_cache` means:
- Do cache-dependent slicing if we DON'T have a cache OR if caching is enabled
This triggers the function even when:
- past_key_values=None (no cache)
- use_cache=False (caching disabled)
- cache_position=None (no position information)
This combination is invalid for cache-dependent preparation and causes a crash when accessing `cache_position[-1]` (line 456).
Note that during normal generation, it works fine because `use_cache=True`, making the buggy `past_key_values is None` part irrelevant.
### Fix
This PR changes the condition to:
```diff
- if past_key_values is None or use_cache:
+ if past_key_values is not None or use_cache:
```
The condition `past_key_values is not None or use_cache` means:
- Do cache-dependent slicing if we DO have a cache OR if caching is enabled
This is semantically correct and matches the intent described in the PR #41505 comment: https://github.com/huggingface/transformers/pull/41505#discussion_r2426080715
> stateful models like `recurrent_gemma` assume that slicing happens, but don't have a Cache
The `use_cache` part handles stateful models, while `past_key_values is not None` handles normal cached models.
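The difference between the two conditions can be checked directly (a standalone sketch; `cache` below is a stand-in object, not a real `Cache` instance):

```python
# Old (buggy) and new (fixed) gating conditions from the diff above.
def old_condition(past_key_values, use_cache):
    return past_key_values is None or use_cache

def new_condition(past_key_values, use_cache):
    return past_key_values is not None or use_cache

cache = object()  # stand-in for a real Cache instance

# The crashing case: no cache and caching disabled.
assert old_condition(None, False) is True    # wrongly enters the cache-dependent path
assert new_condition(None, False) is False   # correctly skips it

# Normal generation (use_cache=True) behaves identically either way.
assert old_condition(None, True) == new_condition(None, True) == True
assert old_condition(cache, True) == new_condition(cache, True) == True

# Cached model with use_cache=False: only the new condition applies slicing.
assert new_condition(cache, False) is True
```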
### Testing
This PR fixes the downstream failing test in TRL:
> tests/test_modeling_geometric_mixture_wrapper.py::TestGeometricMixtureWrapper::test_prepare_inputs_for_generation
See the associated issue:
- https://github.com/huggingface/trl/issues/4272
### Related
- This PR addresses a logic error introduced by:
- #41505
- This PR will fix https://github.com/huggingface/trl/issues/4272
CC:
- @gante, who made the PR #41505
- @zucchini-nlp , who reviewed the PR #41505; see https://github.com/huggingface/transformers/pull/41505#discussion_r2425632292
> maybe i am missing smth, do we apply cache slicing when the `past_key_values is None`?
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41764/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41763
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41763/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41763/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41763/events
|
https://github.com/huggingface/transformers/pull/41763
| 3,535,969,941
|
PR_kwDOCUB6oc6u0Bxy
| 41,763
|
Timesfm 2.5
|
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-21T10:43:25
| 2025-10-21T11:41:45
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41763",
"html_url": "https://github.com/huggingface/transformers/pull/41763",
"diff_url": "https://github.com/huggingface/transformers/pull/41763.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41763.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41763/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41762
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41762/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41762/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41762/events
|
https://github.com/huggingface/transformers/issues/41762
| 3,535,832,788
|
I_kwDOCUB6oc7SwIbU
| 41,762
|
`IndexError: index 0 is out of bounds for dimension 0 with size 0` when loading Gemma3ForConditionalGeneration with DeepSpeed ZeRO-3
|
{
"login": "Asunatan",
"id": 105210894,
"node_id": "U_kgDOBkVkDg",
"avatar_url": "https://avatars.githubusercontent.com/u/105210894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Asunatan",
"html_url": "https://github.com/Asunatan",
"followers_url": "https://api.github.com/users/Asunatan/followers",
"following_url": "https://api.github.com/users/Asunatan/following{/other_user}",
"gists_url": "https://api.github.com/users/Asunatan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Asunatan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Asunatan/subscriptions",
"organizations_url": "https://api.github.com/users/Asunatan/orgs",
"repos_url": "https://api.github.com/users/Asunatan/repos",
"events_url": "https://api.github.com/users/Asunatan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Asunatan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-21T09:58:58
| 2025-10-22T15:10:46
| 2025-10-22T15:10:46
|
NONE
| null | null | null | null |
### System Info
transformers=4.57.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_args.model_local_path,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    # device_map='cuda:3',
)
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
```
### Expected behavior
When I try to **pre-train / fine-tune** `Gemma3ForConditionalGeneration` with **DeepSpeed ZeRO-3**, the job crashes **immediately after the model is initialized** with the following traceback:
```text
[2025-10-21 09:31:12,879] [INFO] [real_accelerator.py:254:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-10-21 09:31:14,455] [INFO] [config.py:744:__init__] Config mesh_device None world_size = 1
[2025-10-21 09:31:14,456] [INFO] [comm.py:675:init_distributed] cdb=None
[2025-10-21 09:31:14,456] [INFO] [comm.py:690:init_distributed] Not using the DeepSpeed or dist launchers, attempting to detect MPI environment...
[2025-10-21 09:31:15,197] [INFO] [comm.py:745:mpi_discovery] Discovered MPI settings of world_rank=0, local_rank=0, world_size=1, master_addr=10.169.115.149, master_port=29500
[2025-10-21 09:31:15,197] [INFO] [comm.py:706:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2025-10-21 09:31:16,938] [INFO] [partition_parameters.py:348:__exit__] finished initializing model - num_params = 884, num_elems = 4.97B
[rank0]: Traceback (most recent call last):
[rank0]:   File "/data/scy/SCY/SonoVLM_V2/deepspeed_train.py", line 519, in <module>
[rank0]:     train()
[rank0]:   File "/data/scy/SCY/SonoVLM_V2/deepspeed_train.py", line 355, in train
[rank0]:     model = Gemma3ForConditionalGeneration.from_pretrained(model_args.model_local_path,
[rank0]:             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 277, in _wrapper
[rank0]:     return func(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 5048, in from_pretrained
[rank0]:     ) = cls._load_pretrained_model(
[rank0]:         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 5362, in _load_pretrained_model
[rank0]:     model._initialize_missing_keys(missing_keys + mismatched_keys, is_quantized)
[rank0]:   File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 5892, in _initialize_missing_keys
[rank0]:     self.initialize_weights()
[rank0]:   File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
[rank0]:     return func(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2984, in initialize_weights
[rank0]:     self.smart_apply(self._initialize_weights)
[rank0]:   File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2975, in smart_apply
[rank0]:     module.smart_apply(module._initialize_weights)
[rank0]:   File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2975, in smart_apply
[rank0]:     module.smart_apply(module._initialize_weights)
[rank0]:   File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2977, in smart_apply
[rank0]:     module.smart_apply(fn)
[rank0]:   File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2978, in smart_apply
[rank0]:     fn(self)
[rank0]:   File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2952, in _initialize_weights
[rank0]:     self._init_weights(module)
[rank0]:   File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/models/gemma3/modeling_gemma3.py", line 434, in _init_weights
[rank0]:     super()._init_weights(module)
[rank0]:   File "/data/scy/anaconda3/envs/pytorch_2.7.1/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2929, in _init_weights
[rank0]:     module.weight.data[module.padding_idx].zero_()
[rank0]:     ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
[rank0]: IndexError: index 0 is out of bounds for dimension 0 with size 0
[rank0]:[W1021 09:31:18.622273376 ProcessGroupNCCL.cpp:1479] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
```
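A minimal sketch of the failure mode and a possible guard. Under ZeRO-3, parameters on non-owning ranks are partitioned away and can have local shape `(0, ...)`, so `weight.data[padding_idx]` raises `IndexError`. The helper below is an assumption-laden illustration (not the transformers fix): it simply checks the local size before zeroing the padding row.

```python
import torch
import torch.nn as nn

def safe_zero_padding(module: nn.Embedding) -> None:
    # Under DeepSpeed ZeRO-3, non-owner ranks hold zero-size parameter
    # shards, so indexing weight.data[padding_idx] raises IndexError.
    # Guard on the actual local size before touching the row.
    if module.padding_idx is not None and module.weight.shape[0] > module.padding_idx:
        module.weight.data[module.padding_idx].zero_()

emb = nn.Embedding(10, 4, padding_idx=0)
safe_zero_padding(emb)  # zeroes row 0 as usual

empty = nn.Embedding(10, 4, padding_idx=0)
empty.weight.data = torch.empty(0, 4)  # simulate a ZeRO-3 partitioned shard
safe_zero_padding(empty)  # no-op instead of IndexError
```

In real ZeRO-3 code the parameter would typically be materialized first (e.g. via DeepSpeed's gathered-parameters context) rather than skipped; the point here is only where the size-0 shard comes from.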
|
{
"login": "Asunatan",
"id": 105210894,
"node_id": "U_kgDOBkVkDg",
"avatar_url": "https://avatars.githubusercontent.com/u/105210894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Asunatan",
"html_url": "https://github.com/Asunatan",
"followers_url": "https://api.github.com/users/Asunatan/followers",
"following_url": "https://api.github.com/users/Asunatan/following{/other_user}",
"gists_url": "https://api.github.com/users/Asunatan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Asunatan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Asunatan/subscriptions",
"organizations_url": "https://api.github.com/users/Asunatan/orgs",
"repos_url": "https://api.github.com/users/Asunatan/repos",
"events_url": "https://api.github.com/users/Asunatan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Asunatan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41762/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41761
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41761/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41761/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41761/events
|
https://github.com/huggingface/transformers/pull/41761
| 3,535,727,788
|
PR_kwDOCUB6oc6uzOOJ
| 41,761
|
transformers cli default flag fix
|
{
"login": "ArjunPimpale",
"id": 144466352,
"node_id": "U_kgDOCJxhsA",
"avatar_url": "https://avatars.githubusercontent.com/u/144466352?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArjunPimpale",
"html_url": "https://github.com/ArjunPimpale",
"followers_url": "https://api.github.com/users/ArjunPimpale/followers",
"following_url": "https://api.github.com/users/ArjunPimpale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunPimpale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArjunPimpale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunPimpale/subscriptions",
"organizations_url": "https://api.github.com/users/ArjunPimpale/orgs",
"repos_url": "https://api.github.com/users/ArjunPimpale/repos",
"events_url": "https://api.github.com/users/ArjunPimpale/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArjunPimpale/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-21T09:25:24
| 2025-10-23T13:58:24
| 2025-10-23T13:33:56
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41761",
"html_url": "https://github.com/huggingface/transformers/pull/41761",
"diff_url": "https://github.com/huggingface/transformers/pull/41761.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41761.patch",
"merged_at": "2025-10-23T13:33:56"
}
|
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes the feature update mentioned in #41731
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Wauplin
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
-->
As discussed in #41731
|
{
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41761/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41760
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41760/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41760/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41760/events
|
https://github.com/huggingface/transformers/issues/41760
| 3,535,679,561
|
I_kwDOCUB6oc7SvjBJ
| 41,760
|
`find_adapter_config_file()` got an unexpected keyword argument `_adapter_model_path`
|
{
"login": "avihu111",
"id": 39214195,
"node_id": "MDQ6VXNlcjM5MjE0MTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/39214195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avihu111",
"html_url": "https://github.com/avihu111",
"followers_url": "https://api.github.com/users/avihu111/followers",
"following_url": "https://api.github.com/users/avihu111/following{/other_user}",
"gists_url": "https://api.github.com/users/avihu111/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avihu111/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avihu111/subscriptions",
"organizations_url": "https://api.github.com/users/avihu111/orgs",
"repos_url": "https://api.github.com/users/avihu111/repos",
"events_url": "https://api.github.com/users/avihu111/events{/privacy}",
"received_events_url": "https://api.github.com/users/avihu111/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-21T09:12:15
| 2025-10-21T13:02:37
| null |
CONTRIBUTOR
| null | null | null | null |
### System Info
- `transformers` version: 4.57.0.dev0
- Platform: Linux-5.14.0-503.23.1.el9_5.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.15
- Huggingface_hub version: 1.0.0.rc5
- Safetensors version: 0.4.5
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: 0.17.5
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: Yes
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
Hey @Cyrilvallez @ArthurZucker and @Rocketknight1
I tried loading Granite Speech 2B (a model with a LoRA adapter) with an updated transformers version (4.57.0.dev0), using the following code, which works in previous versions:
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq
model_name = "ibm-granite/granite-speech-3.3-2b"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_name, device_map="cuda", torch_dtype=torch.bfloat16
)
```
and got the following error:
```
model = AutoModelForSpeechSeq2Seq.from_pretrained(
File "/proj/speech/home/avihu/git_repos/10sep/transformers/src/transformers/models/auto/auto_factory.py", line 385, in from_pretrained
return model_class.from_pretrained(
File "/proj/speech/home/avihu/git_repos/10sep/transformers/src/transformers/modeling_utils.py", line 270, in _wrapper
return func(*args, **kwargs)
File "/proj/speech/home/avihu/git_repos/10sep/transformers/src/transformers/modeling_utils.py", line 4549, in from_pretrained
model.load_adapter(
File "/proj/speech/home/avihu/git_repos/10sep/transformers/src/transformers/integrations/peft.py", line 221, in load_adapter
adapter_config_file = find_adapter_config_file(
TypeError: find_adapter_config_file() got an unexpected keyword argument '_adapter_model_path'
```
looks like `_adapter_model_path` is inserted [here](https://github.com/huggingface/transformers/blob/4e50b8459d981ddcbc9438e85cff8d83fe40a500/src/transformers/models/auto/auto_factory.py#L302C23-L302C24)
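A sketch of the defensive pattern that avoids this class of error: strip internal bookkeeping keys (here the underscore-prefixed ones) from a kwargs dict before forwarding it to a helper whose signature does not accept them. Both functions below are simplified stand-ins, not the actual transformers code.

```python
def find_adapter_config_file(model_id, **hub_kwargs):
    # Stand-in for the real helper, which rejects private kwargs
    # such as "_adapter_model_path".
    return f"{model_id}/adapter_config.json"

def load_adapter(model_id, adapter_kwargs):
    # Pop internal keys (anything prefixed with "_") injected by
    # upstream callers before forwarding to the public helper.
    public_kwargs = {k: v for k, v in adapter_kwargs.items() if not k.startswith("_")}
    return find_adapter_config_file(model_id, **public_kwargs)

cfg = load_adapter(
    "ibm-granite/granite-speech-3.3-2b",
    {"_adapter_model_path": "/tmp/adapter", "revision": "main"},
)
```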
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoModelForSpeechSeq2Seq
model_name = "ibm-granite/granite-speech-3.3-2b"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_name, device_map="cuda", torch_dtype=torch.bfloat16
)
```
### Expected behavior
Return the model without crashing
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41760/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41759
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41759/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41759/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41759/events
|
https://github.com/huggingface/transformers/pull/41759
| 3,535,529,896
|
PR_kwDOCUB6oc6uyjeL
| 41,759
|
added moderation to text generation models
|
{
"login": "DeXtAr47-oss",
"id": 79273068,
"node_id": "MDQ6VXNlcjc5MjczMDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/79273068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DeXtAr47-oss",
"html_url": "https://github.com/DeXtAr47-oss",
"followers_url": "https://api.github.com/users/DeXtAr47-oss/followers",
"following_url": "https://api.github.com/users/DeXtAr47-oss/following{/other_user}",
"gists_url": "https://api.github.com/users/DeXtAr47-oss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DeXtAr47-oss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DeXtAr47-oss/subscriptions",
"organizations_url": "https://api.github.com/users/DeXtAr47-oss/orgs",
"repos_url": "https://api.github.com/users/DeXtAr47-oss/repos",
"events_url": "https://api.github.com/users/DeXtAr47-oss/events{/privacy}",
"received_events_url": "https://api.github.com/users/DeXtAr47-oss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-21T08:29:59
| 2025-10-21T08:38:48
| 2025-10-21T08:38:48
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41759",
"html_url": "https://github.com/huggingface/transformers/pull/41759",
"diff_url": "https://github.com/huggingface/transformers/pull/41759.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41759.patch",
"merged_at": null
}
|
# What does this PR do?
Adds a minimal moderation subsystem and generation-time integration to support content filtering during text generation.
Introduces a small, opt-in API so callers can supply custom safety/classifier implementations that run during generation.
Fixes #41740
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
-->
|
{
"login": "DeXtAr47-oss",
"id": 79273068,
"node_id": "MDQ6VXNlcjc5MjczMDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/79273068?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DeXtAr47-oss",
"html_url": "https://github.com/DeXtAr47-oss",
"followers_url": "https://api.github.com/users/DeXtAr47-oss/followers",
"following_url": "https://api.github.com/users/DeXtAr47-oss/following{/other_user}",
"gists_url": "https://api.github.com/users/DeXtAr47-oss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DeXtAr47-oss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DeXtAr47-oss/subscriptions",
"organizations_url": "https://api.github.com/users/DeXtAr47-oss/orgs",
"repos_url": "https://api.github.com/users/DeXtAr47-oss/repos",
"events_url": "https://api.github.com/users/DeXtAr47-oss/events{/privacy}",
"received_events_url": "https://api.github.com/users/DeXtAr47-oss/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41759/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41758
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41758/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41758/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41758/events
|
https://github.com/huggingface/transformers/pull/41758
| 3,535,521,454
|
PR_kwDOCUB6oc6uyhpg
| 41,758
|
Fixed incorrect model_type for qwen2vl and qwen2.5vl when config is saved and loaded again
|
{
"login": "i3hz",
"id": 144821361,
"node_id": "U_kgDOCKHMcQ",
"avatar_url": "https://avatars.githubusercontent.com/u/144821361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/i3hz",
"html_url": "https://github.com/i3hz",
"followers_url": "https://api.github.com/users/i3hz/followers",
"following_url": "https://api.github.com/users/i3hz/following{/other_user}",
"gists_url": "https://api.github.com/users/i3hz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/i3hz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/i3hz/subscriptions",
"organizations_url": "https://api.github.com/users/i3hz/orgs",
"repos_url": "https://api.github.com/users/i3hz/repos",
"events_url": "https://api.github.com/users/i3hz/events{/privacy}",
"received_events_url": "https://api.github.com/users/i3hz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-21T08:27:38
| 2025-10-21T12:29:39
| 2025-10-21T10:54:58
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41758",
"html_url": "https://github.com/huggingface/transformers/pull/41758",
"diff_url": "https://github.com/huggingface/transformers/pull/41758.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41758.patch",
"merged_at": "2025-10-21T10:54:58"
}
|
# What does this PR do?
Fixes the issue where saving the config and loading it again returned the incorrect `model_type`.
Minor fix in the `__getattribute__` method of the config class for both models.
Fixes #41746
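A hypothetical illustration of the attribute-remapping pattern this PR describes. The class name and the alias table below are invented for the example (they are not the actual transformers code): `__getattribute__` transparently maps a stale serialized `model_type` back to the canonical one on access.

```python
class LegacyAliasConfig:
    # Hypothetical config: a saved file may carry a stale model_type
    # alias, remapped transparently when the attribute is read.
    _ALIASES = {"qwen2_vl_text": "qwen2_vl"}  # invented mapping for illustration

    def __init__(self, model_type):
        self.model_type = model_type

    def __getattribute__(self, name):
        value = super().__getattribute__(name)
        if name == "model_type":
            # Use super() lookups to avoid re-entering this method.
            aliases = super().__getattribute__("_ALIASES")
            return aliases.get(value, value)
        return value

cfg = LegacyAliasConfig("qwen2_vl_text")
```

Overriding `__getattribute__` (rather than `__getattr__`) is what lets the remap apply even though the attribute exists on the instance; the trade-off is that every attribute access pays the extra check.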
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. - https://github.com/huggingface/transformers/issues/41746
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@zucchini-nlp
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41758/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41758/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41757
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41757/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41757/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41757/events
|
https://github.com/huggingface/transformers/pull/41757
| 3,535,176,443
|
PR_kwDOCUB6oc6uxg6x
| 41,757
|
Fix CUDA index out of bounds for q_idx in VLM token type masking for Gemma3, PaliGemma, and example modular
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-21T06:59:22
| 2025-10-22T09:33:12
| 2025-10-22T09:29:47
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41757",
"html_url": "https://github.com/huggingface/transformers/pull/41757",
"diff_url": "https://github.com/huggingface/transformers/pull/41757.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41757.patch",
"merged_at": "2025-10-22T09:29:47"
}
|
Fix CUDA index out of bounds error that occurs during generation with static caches when using token type IDs for bidirectional image attention masking.
### Background
After PR
- #41505
changed cache initialization behavior in `generate()`, a **latent bug in the VLM masking code was exposed**. The error manifests as:
```python
CUDA error: device-side assert triggered
/pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:113: operator(): block: [0,0,0], thread: [0,0,0]
Assertion -sizes[i] <= index && index < sizes[i] && "index out of bounds" failed.
```
### Bug
In the `token_type_ids_mask_function` inner mask, the code correctly handles out-of-bounds `kv_idx` values but **fails to handle out-of-bounds `q_idx` values**.
The PR
- #39396
originally fixed the bidirectional image masking by adding bounds checking for `kv_idx`, but overlooked that `q_idx` needed the same protection.
During generation with static caches:
- Cache shapes can exceed the actual input sequence length (e.g., static cache of 2048 positions with 512 token input)
- The masking function receives both `q_idx` and `kv_idx` that can exceed `token_type_ids.shape[1]`
- Direct indexing like `token_type_ids[batch_idx, q_idx]` causes CUDA index out of bounds errors when `q_idx >= token_type_ids.shape[1]`
The code comment on line 740 already acknowledged this issue:
> "NOTE: static cache shape goes beyond input seq length, while token_type_ids.shape[1] == input seq length"
Bounds checking was implemented for `kv_idx`, but `q_idx` was overlooked.
### Fix
This PR adds the same `torch.where` bounds-checking pattern for `q_idx` that already existed for `kv_idx`:
1. Create `safe_q_idx` to clamp indices within valid range
2. Use safe indices for tensor access
3. Apply `torch.where` to mask out-of-bounds values with appropriate defaults (0 for `token_type_ids`, -1 for `image_group_ids`)
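The steps above can be sketched in a framework-free way. This is a toy illustration of the clamp-then-mask pattern the PR describes, using plain Python lists instead of tensors; the names (`token_type_ids`, `image_group_ids`, the defaults 0 and -1) follow the PR text, but the real fix uses `torch.where` on tensors.

```python
# Framework-free sketch of the bounds-checking pattern described above.
# The real implementation clamps index tensors and uses torch.where;
# here we illustrate the same logic per scalar index.

def masked_lookup(values, idx, out_of_bounds_default):
    """Clamp idx into range before indexing, then substitute a default
    for positions that were actually out of bounds."""
    safe_idx = min(idx, len(values) - 1)   # 1. create a "safe" index
    looked_up = values[safe_idx]           # 2. index with the safe value
    # 3. torch.where-style substitution for out-of-bounds positions
    return looked_up if idx < len(values) else out_of_bounds_default

token_type_ids = [0, 0, 1, 1]    # toy per-token modality flags, seq len 4
image_group_ids = [-1, -1, 0, 0]

# An in-bounds query position behaves normally:
assert masked_lookup(token_type_ids, 2, 0) == 1
# A static-cache position past the input length no longer indexes out of
# bounds; it falls back to the defaults noted above (0 and -1):
assert masked_lookup(token_type_ids, 7, 0) == 0
assert masked_lookup(image_group_ids, 7, -1) == -1
```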
### Affected Models
- `Gemma3ForConditionalGeneration`
- `PaliGemmaForConditionalGeneration`
- Example modular transformer template (`modeling_new_task_model.py`)
### Testing
This PR fixes the downstream failing test in TRL:
> tests/test_grpo_trainer.py::TestGRPOTrainer::test_training_vlm_0_trl_internal_testing_tiny_Gemma3ForConditionalGeneration
See associated issue:
- https://github.com/huggingface/trl/issues/4281
### Related Issues
- Regression exposed by: cache initialization refactor
- #41505
- This PR completes the fix started in (which fixed `kv_idx` but missed `q_idx`):
- #39396
- This PR will fix https://github.com/huggingface/trl/issues/4281
CC:
- @gante, who made the PR #41505
- @zucchini-nlp , who made the PR #39396
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41757/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41756
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41756/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41756/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41756/events
|
https://github.com/huggingface/transformers/issues/41756
| 3,535,052,851
|
I_kwDOCUB6oc7StKAz
| 41,756
|
InternVL3-8B quantize got TypeError: can only concatenate str (not "list") to str
|
{
"login": "BigFaceBoy",
"id": 12423597,
"node_id": "MDQ6VXNlcjEyNDIzNTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/12423597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BigFaceBoy",
"html_url": "https://github.com/BigFaceBoy",
"followers_url": "https://api.github.com/users/BigFaceBoy/followers",
"following_url": "https://api.github.com/users/BigFaceBoy/following{/other_user}",
"gists_url": "https://api.github.com/users/BigFaceBoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BigFaceBoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BigFaceBoy/subscriptions",
"organizations_url": "https://api.github.com/users/BigFaceBoy/orgs",
"repos_url": "https://api.github.com/users/BigFaceBoy/repos",
"events_url": "https://api.github.com/users/BigFaceBoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BigFaceBoy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-21T06:12:18
| 2025-10-21T09:31:52
| null |
NONE
| null | null | null | null |
### System Info
- `transformers` version: 4.56.2
- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.31
- Python version: 3.12.4
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA L40
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I tried to quantize InternVL3-8B with llmcompressor.
The quantization code is:
```python
# W4A16
import base64
from io import BytesIO
import torch
from datasets import load_dataset
from transformers import AutoProcessor, AutoModel, AutoTokenizer
from qwen_vl_utils import process_vision_info
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.utils import dispatch_for_generation
# Load model.
model_id = "/root/ezviz/models/InternVL3-8B"
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.bfloat16, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
DATASET_ID = "/root/ezviz/datasets/flickr30k"
DATASET_SPLIT = "test[:512]"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)
def preprocess_and_tokenize(example):
# preprocess
buffered = BytesIO()
example["image"].save(buffered, format="PNG")
encoded_image = base64.b64encode(buffered.getvalue())
encoded_image_text = encoded_image.decode("utf-8")
base64_qwen = f"data:image;base64,{encoded_image_text}"
messages = [
{
"role": "user",
"content": [
{"type": "image", "image": base64_qwen},
{"type": "text", "text": "What does the image show?"},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
# tokenize
return processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=False
)
ds = ds.map(preprocess_and_tokenize, remove_columns=ds.column_names)
def data_collator(batch):
assert len(batch) == 1
return {key: torch.tensor(value) for key, value in batch[0].items()}
# Recipe
recipe = [
GPTQModifier(
targets="Linear",
scheme="W4A16",
ignore=["lm_head"]
),
]
# Perform oneshot
oneshot(
model=model,
tokenizer=model_id,
dataset=ds,
recipe=recipe,
max_seq_length=MAX_SEQUENCE_LENGTH,
num_calibration_samples=NUM_CALIBRATION_SAMPLES,
trust_remote_code_model=True,
data_collator=data_collator,
sequential_targets=["InternLM2ForCausalLM"],
)
SAVE_DIR = "/root/ezviz/models/InternVL3-8B-W4A16"
model.save_pretrained(SAVE_DIR, save_compressed=True)
processor.save_pretrained(SAVE_DIR)
```
When I run the code I get this error:
```
Traceback (most recent call last):
File "/root/ezviz/src/img_calib/w4a16.py", line 54, in <module>
ds = ds.map(preprocess_and_tokenize, remove_columns=ds.column_names)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniforge3/envs/quantize/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 562, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniforge3/envs/quantize/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3327, in map
for rank, done, content in Dataset._map_single(**unprocessed_kwargs):
File "/root/miniforge3/envs/quantize/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3659, in _map_single
for i, example in iter_outputs(shard_iterable):
File "/root/miniforge3/envs/quantize/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3633, in iter_outputs
yield i, apply_function(example, i, offset=offset)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniforge3/envs/quantize/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3556, in apply_function
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/ezviz/src/img_calib/w4a16.py", line 42, in preprocess_and_tokenize
text = processor.apply_chat_template(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniforge3/envs/quantize/lib/python3.12/site-packages/transformers/tokenization_utils_base.py", line 1640, in apply_chat_template
rendered_chat, generation_indices = render_jinja_template(
^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniforge3/envs/quantize/lib/python3.12/site-packages/transformers/utils/chat_template_utils.py", line 521, in render_jinja_template
rendered_chat = compiled_template.render(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniforge3/envs/quantize/lib/python3.12/site-packages/jinja2/environment.py", line 1295, in render
self.environment.handle_exception()
File "/root/miniforge3/envs/quantize/lib/python3.12/site-packages/jinja2/environment.py", line 942, in handle_exception
raise rewrite_traceback_stack(source=source)
File "<template>", line 23, in top-level template code
TypeError: can only concatenate str (not "list") to str
```
### Expected behavior
I expect quantization to succeed.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41756/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41755
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41755/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41755/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41755/events
|
https://github.com/huggingface/transformers/pull/41755
| 3,534,536,028
|
PR_kwDOCUB6oc6uvk_r
| 41,755
|
Swap columns and rows of the grid layout in LFM2-VL
|
{
"login": "ankke",
"id": 48625325,
"node_id": "MDQ6VXNlcjQ4NjI1MzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/48625325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankke",
"html_url": "https://github.com/ankke",
"followers_url": "https://api.github.com/users/ankke/followers",
"following_url": "https://api.github.com/users/ankke/following{/other_user}",
"gists_url": "https://api.github.com/users/ankke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankke/subscriptions",
"organizations_url": "https://api.github.com/users/ankke/orgs",
"repos_url": "https://api.github.com/users/ankke/repos",
"events_url": "https://api.github.com/users/ankke/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankke/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-21T01:56:18
| 2025-10-22T12:52:06
| 2025-10-22T12:52:06
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41755",
"html_url": "https://github.com/huggingface/transformers/pull/41755",
"diff_url": "https://github.com/huggingface/transformers/pull/41755.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41755.patch",
"merged_at": "2025-10-22T12:52:06"
}
|
Fixes swapped rows and columns that resulted in incorrect positional special tokens. Also fixes the integration tests in the modeling file.
@zucchini-nlp
|
{
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41755/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41754
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41754/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41754/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41754/events
|
https://github.com/huggingface/transformers/pull/41754
| 3,534,404,484
|
PR_kwDOCUB6oc6uvI-_
| 41,754
|
Add pytree registration for static cache
|
{
"login": "angelayi",
"id": 10901756,
"node_id": "MDQ6VXNlcjEwOTAxNzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/10901756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/angelayi",
"html_url": "https://github.com/angelayi",
"followers_url": "https://api.github.com/users/angelayi/followers",
"following_url": "https://api.github.com/users/angelayi/following{/other_user}",
"gists_url": "https://api.github.com/users/angelayi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/angelayi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/angelayi/subscriptions",
"organizations_url": "https://api.github.com/users/angelayi/orgs",
"repos_url": "https://api.github.com/users/angelayi/repos",
"events_url": "https://api.github.com/users/angelayi/events{/privacy}",
"received_events_url": "https://api.github.com/users/angelayi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-21T00:30:07
| 2025-10-21T00:32:02
| null |
CONTRIBUTOR
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41754",
"html_url": "https://github.com/huggingface/transformers/pull/41754",
"diff_url": "https://github.com/huggingface/transformers/pull/41754.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41754.patch",
"merged_at": null
}
|
* Refactored the existing pytree registration for DynamicCache to be a more generalized form
* Also fixed the existing pytree registration which triggers [lazy_initialization](https://github.com/huggingface/transformers/blob/9aab965b1e61d92d402809bd467c317ec464e560/src/transformers/cache_utils.py#L94-L95). This shouldn't be needed because we already have all the contents of the dynamic cache created?
* Added pytree support for StaticCache
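To make the mechanism concrete, here is a minimal, dependency-free sketch of what a pytree registration does for a cache-like container. `torch.utils._pytree.register_pytree_node` works the same way in spirit: you supply a flatten function (container to leaves plus static context) and an unflatten function (leaves plus context back to a container). All names here are illustrative, not the actual transformers implementation.

```python
# Toy pytree registry: flatten a cache-like object into leaves + context,
# transform the leaves, then rebuild the object. This mirrors what
# register_pytree_node enables for StaticCache, using plain lists as leaves.

class ToyStaticCache:
    def __init__(self, keys, values, max_len):
        self.keys = keys          # leaf (would be a key tensor)
        self.values = values      # leaf (would be a value tensor)
        self.max_len = max_len    # static metadata, kept in the context

def flatten_cache(cache):
    # children that transforms should traverse, plus static context
    return [cache.keys, cache.values], cache.max_len

def unflatten_cache(children, context):
    keys, values = children
    return ToyStaticCache(keys, values, context)

REGISTRY = {ToyStaticCache: (flatten_cache, unflatten_cache)}

cache = ToyStaticCache([1, 2], [3, 4], max_len=2048)
flat_fn, unflat_fn = REGISTRY[type(cache)]
leaves, ctx = flat_fn(cache)
# e.g. after mapping a function over the leaves:
rebuilt = unflat_fn([[x * 10 for x in leaf] for leaf in leaves], ctx)
assert rebuilt.keys == [10, 20] and rebuilt.values == [30, 40]
assert rebuilt.max_len == 2048  # static context survives the round trip
```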
cc @tugsbayasgalan
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41754/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41753
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41753/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41753/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41753/events
|
https://github.com/huggingface/transformers/issues/41753
| 3,533,887,811
|
I_kwDOCUB6oc7SotlD
| 41,753
|
Please port for MPS acceleration on MacOS.
|
{
"login": "shyamalschandra",
"id": 9545735,
"node_id": "MDQ6VXNlcjk1NDU3MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9545735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shyamalschandra",
"html_url": "https://github.com/shyamalschandra",
"followers_url": "https://api.github.com/users/shyamalschandra/followers",
"following_url": "https://api.github.com/users/shyamalschandra/following{/other_user}",
"gists_url": "https://api.github.com/users/shyamalschandra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shyamalschandra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shyamalschandra/subscriptions",
"organizations_url": "https://api.github.com/users/shyamalschandra/orgs",
"repos_url": "https://api.github.com/users/shyamalschandra/repos",
"events_url": "https://api.github.com/users/shyamalschandra/events{/privacy}",
"received_events_url": "https://api.github.com/users/shyamalschandra/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
open
| false
| null |
[] | null |
[] | 2025-10-20T20:46:51
| 2025-10-27T06:34:05
| null |
NONE
| null | null | null | null |
### Feature request
Please port for MPS acceleration on MacOS.
### Motivation
Please port for MPS acceleration on MacOS.
### Your contribution
I can give you a happy comment if you help.
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41753/timeline
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41752
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41752/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41752/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41752/events
|
https://github.com/huggingface/transformers/pull/41752
| 3,533,294,923
|
PR_kwDOCUB6oc6urV-t
| 41,752
|
Add pluggable safety hooks to text generation via optional safety_config
|
{
"login": "Tejassveer08",
"id": 147965837,
"node_id": "U_kgDOCNHHjQ",
"avatar_url": "https://avatars.githubusercontent.com/u/147965837?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tejassveer08",
"html_url": "https://github.com/Tejassveer08",
"followers_url": "https://api.github.com/users/Tejassveer08/followers",
"following_url": "https://api.github.com/users/Tejassveer08/following{/other_user}",
"gists_url": "https://api.github.com/users/Tejassveer08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tejassveer08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tejassveer08/subscriptions",
"organizations_url": "https://api.github.com/users/Tejassveer08/orgs",
"repos_url": "https://api.github.com/users/Tejassveer08/repos",
"events_url": "https://api.github.com/users/Tejassveer08/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tejassveer08/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-20T17:27:58
| 2025-10-21T06:25:29
| 2025-10-21T06:25:29
|
NONE
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41752",
"html_url": "https://github.com/huggingface/transformers/pull/41752",
"diff_url": "https://github.com/huggingface/transformers/pull/41752.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41752.patch",
"merged_at": null
}
|
# What does this PR do?
<!--
GenerationConfig: Added optional safety_config kwarg to carry a user-defined safety configuration or factory. Stopping criteria: Introduced SafetyCriteria to allow user-provided callbacks to stop generation based on input_ids/scores. GenerationMixin integration:
Construct and append safety processors from generation_config.safety_config.construct_processors(...) if available. Construct and append safety stopping criteria from generation_config.safety_config.construct_criteria() if available. Wrapped construction in try/except to preserve backward compatibility and avoid failing generation on safety construction errors.
-->
<!-- Remove if not applicable -->
Fixes #41740
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker @Cyrilvallez
- vision models: @yonigozlan @molbap
- audio models: @eustlb @ebezzam @vasqu
- multimodal models: @zucchini-nlp
- graph models: @clefourrier
Library:
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- continuous batching: @remi-or @ArthurZucker @McPatate
- pipelines: @Rocketknight1
- tokenizers: @ArthurZucker and @itazap
- trainer: @SunMarc
- attention: @vasqu @ArthurZucker @CyrilVallez
- model loading (from pretrained, etc): @CyrilVallez
- distributed: @3outeille @ArthurZucker
- CIs: @ydshieh
Integrations:
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization: @SunMarc @MekkCyber
- kernels: @MekkCyber @drbh
- peft: @BenjaminBossan @githubnemo
Devices/Backends:
- AMD ROCm: @ivarflakstad
- Intel XPU: @IlyasMoutawwakil
- Ascend NPU: @ivarflakstad
Documentation: @stevhliu
Research projects are not maintained and should be taken as is.
-->
|
{
"login": "Tejassveer08",
"id": 147965837,
"node_id": "U_kgDOCNHHjQ",
"avatar_url": "https://avatars.githubusercontent.com/u/147965837?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tejassveer08",
"html_url": "https://github.com/Tejassveer08",
"followers_url": "https://api.github.com/users/Tejassveer08/followers",
"following_url": "https://api.github.com/users/Tejassveer08/following{/other_user}",
"gists_url": "https://api.github.com/users/Tejassveer08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tejassveer08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tejassveer08/subscriptions",
"organizations_url": "https://api.github.com/users/Tejassveer08/orgs",
"repos_url": "https://api.github.com/users/Tejassveer08/repos",
"events_url": "https://api.github.com/users/Tejassveer08/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tejassveer08/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41752/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41751
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41751/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41751/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41751/events
|
https://github.com/huggingface/transformers/pull/41751
| 3,532,890,842
|
PR_kwDOCUB6oc6up99-
| 41,751
|
Reinstate self.scaling in Gemma3nTextAttention
|
{
"login": "RyanMullins",
"id": 868555,
"node_id": "MDQ6VXNlcjg2ODU1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/868555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanMullins",
"html_url": "https://github.com/RyanMullins",
"followers_url": "https://api.github.com/users/RyanMullins/followers",
"following_url": "https://api.github.com/users/RyanMullins/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanMullins/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanMullins/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanMullins/subscriptions",
"organizations_url": "https://api.github.com/users/RyanMullins/orgs",
"repos_url": "https://api.github.com/users/RyanMullins/repos",
"events_url": "https://api.github.com/users/RyanMullins/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanMullins/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-10-20T15:27:45
| 2025-10-28T14:11:39
| null |
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41751",
"html_url": "https://github.com/huggingface/transformers/pull/41751",
"diff_url": "https://github.com/huggingface/transformers/pull/41751.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41751.patch",
"merged_at": null
}
|
# What does this PR do?
Previously, the `Gemma3nTextAttention` class omitted the `self.scaling` property and passed a hard-coded 1.0 to the `attention_interface()` function.
This PR reinstates the `self.scaling` property, sets it to 1.0, and passes `self.scaling` to the `attention_interface()` function in `Gemma3nTextAttention.forward()`, which should make it more configurable for users interested in experimenting with the Gemma 3n arch, and improves the lineage of its modular inheritance relative to Gemma 3.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @Cyrilvallez
| null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41751/timeline
| null | null | null | null | true
| false
|
https://api.github.com/repos/huggingface/transformers/issues/41750
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41750/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41750/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41750/events
|
https://github.com/huggingface/transformers/pull/41750
| 3,532,767,118
|
PR_kwDOCUB6oc6upi48
| 41,750
|
:rotating_light: [`Clip`] Fix masking and enable flash attention on all model types
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5769473378,
"node_id": "LA_kwDOCUB6oc8AAAABV-MtYg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Vision",
"name": "Vision",
"color": "C079EF",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-20T14:53:33
| 2025-10-24T18:44:14
| 2025-10-24T18:44:10
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41750",
"html_url": "https://github.com/huggingface/transformers/pull/41750",
"diff_url": "https://github.com/huggingface/transformers/pull/41750.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41750.patch",
"merged_at": "2025-10-24T18:44:10"
}
|
Clip used old mask APIs, leading to a confused usage:
- A causal mask (normal triu mask)
- A padding mask (encoder mask == only accounting for padding)
- Adding both of the above == final mask --> causal mask with padding
^ This only works for interfaces that support 4D masks, which disabled FA usage in general.
This PR correctly changes this to the new API, which handles padding automatically. We additionally have to pass the `is_causal` kwarg to dynamically switch between modality types (text == causal, image == full). This is only enabled through recent PRs (fa #39707, sdpa #41692).
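As a toy illustration (pure Python with toy shapes and additive float masks; the real code works on 4D `torch` tensors per batch and head), the old combination looked roughly like this:

```python
NEG_INF = float("-inf")
seq_len, pad_start = 4, 3  # toy example: the last key position is padding

# Causal mask: query q may only attend to keys k <= q (normal triu mask).
causal = [[0.0 if k <= q else NEG_INF for k in range(seq_len)] for q in range(seq_len)]
# Padding mask: mask out padded key positions for every query.
padding = [NEG_INF if k >= pad_start else 0.0 for k in range(seq_len)]
# Old approach: add both masks == final mask --> causal mask with padding.
combined = [[causal[q][k] + padding[k] for k in range(seq_len)] for q in range(seq_len)]
```

With the new mask API, padding is handled automatically and only the `is_causal` kwarg switches between the text (causal) and image (full) paths.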
Closes #41673
Fixes #41668
|
{
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41750/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41749
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41749/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41749/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41749/events
|
https://github.com/huggingface/transformers/issues/41749
| 3,532,707,392
|
I_kwDOCUB6oc7SkNZA
| 41,749
|
`_get_num_multimodal_tokens` is not implemented for model `mllama`
|
{
"login": "mrtpk",
"id": 8076245,
"node_id": "MDQ6VXNlcjgwNzYyNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8076245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrtpk",
"html_url": "https://github.com/mrtpk",
"followers_url": "https://api.github.com/users/mrtpk/followers",
"following_url": "https://api.github.com/users/mrtpk/following{/other_user}",
"gists_url": "https://api.github.com/users/mrtpk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrtpk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrtpk/subscriptions",
"organizations_url": "https://api.github.com/users/mrtpk/orgs",
"repos_url": "https://api.github.com/users/mrtpk/repos",
"events_url": "https://api.github.com/users/mrtpk/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrtpk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-20T14:38:22
| 2025-10-21T09:58:49
| 2025-10-21T09:58:49
|
NONE
| null | null | null | null |
vLLM 0.11's Transformers backend expects the HF processor to implement a method called `_get_num_multimodal_tokens`, which is [not implemented for mllama](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mllama/processing_mllama.py) in `transformers 4.57.1`.
Because of this, `vllm serve meta-llama/Llama-3.2-11B-Vision` fails on `vllm 0.11.0`, while it works on `vllm 0.10.2`.
The error is `'MllamaProcessor' object has no attribute '_get_num_multimodal_tokens'`.
## Related
https://github.com/vllm-project/vllm/issues/27198
### Who can help?
Tagging @yonigozlan @molbap @zucchini-nlp for input — happy to implement the method if no one’s on it yet, and I’d appreciate your guidance.
### Reproduction
```py
from transformers import AutoProcessor
proc = AutoProcessor.from_pretrained("meta-llama/Llama-3.2-11B-Vision-Instruct")
print(hasattr(proc, "_get_num_multimodal_tokens"))  # expected True, but prints False
```
### Expected behavior
Implement `_get_num_multimodal_tokens` as it is implemented for other models in `./src/transformers/models/` (like `gemma3`).
## Useful links
* https://huggingface.co/docs/transformers/main/en/transformers_as_backend#multimodal-models
* https://blog.vllm.ai/2025/04/11/transformers-backend.html
|
{
"login": "mrtpk",
"id": 8076245,
"node_id": "MDQ6VXNlcjgwNzYyNDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8076245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrtpk",
"html_url": "https://github.com/mrtpk",
"followers_url": "https://api.github.com/users/mrtpk/followers",
"following_url": "https://api.github.com/users/mrtpk/following{/other_user}",
"gists_url": "https://api.github.com/users/mrtpk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrtpk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrtpk/subscriptions",
"organizations_url": "https://api.github.com/users/mrtpk/orgs",
"repos_url": "https://api.github.com/users/mrtpk/repos",
"events_url": "https://api.github.com/users/mrtpk/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrtpk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41749/timeline
| null |
completed
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| false
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41748
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41748/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41748/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41748/events
|
https://github.com/huggingface/transformers/pull/41748
| 3,532,636,827
|
PR_kwDOCUB6oc6upGc7
| 41,748
|
Reduce warning noise caused by Tensor.new_tensor
|
{
"login": "st81",
"id": 58893365,
"node_id": "MDQ6VXNlcjU4ODkzMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/58893365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/st81",
"html_url": "https://github.com/st81",
"followers_url": "https://api.github.com/users/st81/followers",
"following_url": "https://api.github.com/users/st81/following{/other_user}",
"gists_url": "https://api.github.com/users/st81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/st81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/st81/subscriptions",
"organizations_url": "https://api.github.com/users/st81/orgs",
"repos_url": "https://api.github.com/users/st81/repos",
"events_url": "https://api.github.com/users/st81/events{/privacy}",
"received_events_url": "https://api.github.com/users/st81/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-10-20T14:18:22
| 2025-10-21T11:55:07
| 2025-10-21T11:54:13
|
CONTRIBUTOR
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41748",
"html_url": "https://github.com/huggingface/transformers/pull/41748",
"diff_url": "https://github.com/huggingface/transformers/pull/41748.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41748.patch",
"merged_at": "2025-10-21T11:54:13"
}
|
# What does this PR do?
This PR replaces `Tensor.new_tensor()` calls to suppress a warning that users cannot address, so they can focus on meaningful warnings instead of being confused by ones they cannot act on.
More specifically, when calling the `EncoderDecoderModel` forward method, users see a warning like:
```py
from transformers import EncoderDecoderModel, AutoTokenizer
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
model.config.decoder_start_token_id = tok.cls_token_id
model.config.pad_token_id = tok.pad_token_id
src = tok("hello world", return_tensors="pt")
tgt = tok("hi there", return_tensors="pt").input_ids
labels = tgt.clone()
labels[labels == tok.pad_token_id] = -100
out = model(
input_ids=src["input_ids"],
attention_mask=src["attention_mask"],
labels=labels,
)
```
```sh
/home/shutotakahashi/projects/transformers-uv/transformers/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py:453:
UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.detach().clone() or sourceTensor.detach().clone().requires_grad_(True), rather than tensor.new_tensor(sourceTensor).
decoder_attention_mask = decoder_input_ids.new_tensor(decoder_input_ids != self.config.pad_token_id)
```
This warning is triggered by the use of `Tensor.new_tensor()` in the internal code and cannot be resolved by users.
The fix is functionally equivalent because `new_tensor()` creates a new tensor with the same dtype as the original tensor. Additionally, I think the fixed version is more semantically clear as it explicitly shows that `decoder_attention_mask` is created through tensor comparison followed by type casting.
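Presumably the replacement looks something like the following (the exact line is hypothetical, but it matches the comparison-then-cast description above):

```diff
- decoder_attention_mask = decoder_input_ids.new_tensor(decoder_input_ids != self.config.pad_token_id)
+ decoder_attention_mask = (decoder_input_ids != self.config.pad_token_id).to(decoder_input_ids.dtype)
```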
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- text models: @ArthurZucker @Cyrilvallez
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41748/timeline
| null | null | null | null | true
| true
|
https://api.github.com/repos/huggingface/transformers/issues/41747
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/41747/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/41747/comments
|
https://api.github.com/repos/huggingface/transformers/issues/41747/events
|
https://github.com/huggingface/transformers/pull/41747
| 3,532,498,036
|
PR_kwDOCUB6oc6uooUC
| 41,747
|
Remove invalid `@staticmethod` from module-level get_device_and_memory_breakdown
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] |
closed
| false
| null |
[] | null |
[] | 2025-10-20T13:41:19
| 2025-10-22T12:53:34
| 2025-10-22T08:52:29
|
MEMBER
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/41747",
"html_url": "https://github.com/huggingface/transformers/pull/41747",
"diff_url": "https://github.com/huggingface/transformers/pull/41747.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/41747.patch",
"merged_at": "2025-10-22T08:52:29"
}
|
This PR removes an invalid `@staticmethod` decorator applied to the module-level function `get_device_and_memory_breakdown`.
The decorator caused a runtime error on Python 3.9, even though it appeared to work fine on newer Python versions.
See `trl` stacktrace for Python 3.9: https://github.com/huggingface/trl/actions/runs/18605452485/job/53053774127
```python
ERROR ContinuousBatchingLogger:continuous_api.py:879 Error in generation loop: 'staticmethod' object is not callable
Traceback (most recent call last):
File "/__w/trl/trl/.venv/lib/python3.9/site-packages/transformers/generation/continuous_batching/continuous_api.py", line 837, in _run_generation_loop
paged_attention_cache = PagedAttentionCache(
File "/__w/trl/trl/.venv/lib/python3.9/site-packages/transformers/generation/continuous_batching/cache.py", line 191, in __init__
num_blocks, max_batch_tokens = memory_handler.infer_num_blocks_and_max_batch_tokens(
File "/__w/trl/trl/.venv/lib/python3.9/site-packages/transformers/generation/continuous_batching/cache.py", line 437, in infer_num_blocks_and_max_batch_tokens
num_blocks, max_batch_tokens = self.compute_num_blocks_and_max_batch_tokens(
File "/__w/trl/trl/.venv/lib/python3.9/site-packages/transformers/generation/continuous_batching/cache.py", line 476, in compute_num_blocks_and_max_batch_tokens
cache_memory = self.get_available_memory(max_memory_percent)
File "/__w/trl/trl/.venv/lib/python3.9/site-packages/transformers/generation/continuous_batching/cache.py", line 411, in get_available_memory
_, total, reserved, allocated = get_device_and_memory_breakdown()
TypeError: 'staticmethod' object is not callable
```
### Root cause
In Python 3.9 and earlier, `@staticmethod` produces a descriptor object that is not directly callable when defined outside a class.
Starting with Python 3.10, CPython changed the behavior of `staticmethod`: staticmethod objects gained a `__call__` method that delegates to the wrapped function, making them callable even outside a class context. However, this masked the underlying issue in newer Python versions.
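A small self-contained demonstration of this version difference (a dummy function stands in for the real module-level one):

```python
import sys

def get_memory_breakdown():  # dummy stand-in for the real module-level function
    return ("cpu", 0, 0, 0)

wrapped = staticmethod(get_memory_breakdown)  # what the stray decorator produced

if sys.version_info >= (3, 10):
    # Since CPython 3.10, staticmethod objects delegate __call__, so this
    # works and the bug went unnoticed on newer interpreters.
    assert wrapped() == ("cpu", 0, 0, 0)
else:
    # On 3.9 and earlier this raises the error seen in the traceback above.
    try:
        wrapped()
    except TypeError as e:
        assert "not callable" in str(e)
```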
### Fix
Remove the invalid decorator and leave the function as a normal callable at module scope.
This PR will fix https://github.com/huggingface/trl/issues/4308
CC: @remi-or, who created the original PR:
- https://github.com/huggingface/transformers/pull/40426
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/41747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/41747/timeline
| null | null | null | null | true
| true
|