Schema (field: type):

repo: string
github_id: int64
github_node_id: string
number: int64
html_url: string
api_url: string
title: string
body: string
state: string
state_reason: string
locked: bool
comments_count: int64
labels: list
assignees: list
created_at: string
updated_at: string
closed_at: string
author_association: string
milestone_title: string
snapshot_id: string
extracted_at: string
author_login: string
author_id: int64
author_node_id: string
author_type: string
author_site_admin: bool
repo: huggingface/transformers
github_id: 4123254641
github_node_id: I_kwDOCUB6oc71w99x
number: 44957
html_url: https://github.com/huggingface/transformers/issues/44957
api_url: https://api.github.com/repos/huggingface/transformers/issues/44957
title: Add HyperCLOVA X SEED Think 14B
body: It would be great to add native support for **HyperCLOVA X SEED Think 14B** to the Transformers library, so users can load it without `trust_remote_code=True`. In addition, this model is intended to serve as the backbone for future multimodal models to be released on the Hugging Face Hub. Without native Transformers su...
state: open
state_reason: null
locked: false
comments_count: 11
labels: ["New model"]
assignees: []
created_at: 2026-03-23T19:37:47Z
updated_at: 2026-04-01T13:03:52Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: bigshanedogg
author_id: 18084680
author_node_id: MDQ6VXNlcjE4MDg0Njgw
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4124206231
github_node_id: I_kwDOCUB6oc710mSX
number: 44959
html_url: https://github.com/huggingface/transformers/issues/44959
api_url: https://api.github.com/repos/huggingface/transformers/issues/44959
title: Add automatic dtype alignment and validation for model inputs and pipelines
body: ### Feature request Currently, users frequently encounter runtime errors caused by dtype mismatches (e.g., float32 vs float16) when working with Transformers models and pipelines—especially in mixed precision or when integrating with libraries like diffusers and accelerate. These errors are often: non-obvious diffic...
state: closed
state_reason: completed
locked: false
comments_count: 2
labels: ["Feature request"]
assignees: []
created_at: 2026-03-23T23:06:37Z
updated_at: 2026-03-24T13:29:53Z
closed_at: 2026-03-24T13:29:53Z
author_association: NONE
milestone_title: null
snapshot_id: 20260325T173244Z
extracted_at: 2026-03-25T17:32:44Z
author_login: sasmita016
author_id: 36115589
author_node_id: MDQ6VXNlcjM2MTE1NTg5
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4124865639
github_node_id: I_kwDOCUB6oc713HRn
number: 44960
html_url: https://github.com/huggingface/transformers/issues/44960
api_url: https://api.github.com/repos/huggingface/transformers/issues/44960
title: GLM5
body: ### System Info - `transformers` version: 5.3.0.dev0 - Platform: Linux-5.15.0-164-generic-x86_64-with-glibc2.35 - Python version: 3.12.13 - Huggingface_hub version: 1.7.2 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (ac...
state: closed
state_reason: completed
locked: false
comments_count: 5
labels: ["bug"]
assignees: []
created_at: 2026-03-24T02:42:32Z
updated_at: 2026-04-23T11:31:55Z
closed_at: 2026-04-23T08:36:58Z
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260423T120024Z
extracted_at: 2026-04-23T12:00:24Z
author_login: inisis
author_id: 46103969
author_node_id: MDQ6VXNlcjQ2MTAzOTY5
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4125018795
github_node_id: I_kwDOCUB6oc713sqr
number: 44961
html_url: https://github.com/huggingface/transformers/issues/44961
api_url: https://api.github.com/repos/huggingface/transformers/issues/44961
title: racoon
body: ### System Info ```shell vs-code.x86 ``` ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction recurse,/,,`\~~,./.,,\ ### ...
state: closed
state_reason: completed
locked: false
comments_count: 0
labels: []
assignees: []
created_at: 2026-03-24T03:35:29Z
updated_at: 2026-03-24T13:30:43Z
closed_at: 2026-03-24T13:30:43Z
author_association: NONE
milestone_title: null
snapshot_id: 20260325T173244Z
extracted_at: 2026-03-25T17:32:44Z
author_login: Cleanskiier27
author_id: 220620570
author_node_id: U_kgDODSZnGg
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4125250944
github_node_id: I_kwDOCUB6oc714lWA
number: 44962
html_url: https://github.com/huggingface/transformers/issues/44962
api_url: https://api.github.com/repos/huggingface/transformers/issues/44962
title: Qwen3VL/Qwen2.5VL VisionAttention breaks torch.compile with flash_attention_2
body: ## Bug description `Qwen3VLVisionAttention` (and `Qwen2_5_VLVisionAttention`) computes `max_seqlen` as a 0-d tensor: ```python # src/transformers/models/qwen3_vl/modeling_qwen3_vl.py, line 221 max_seqlen = (cu_seqlens[1:] - cu_seqlens[:-1]).max() ``` This is then passed to `flash_attn_varlen_func` via `max_length_q`...
state: closed
state_reason: completed
locked: false
comments_count: 6
labels: []
assignees: []
created_at: 2026-03-24T04:53:11Z
updated_at: 2026-05-01T08:35:31Z
closed_at: 2026-05-01T08:35:31Z
author_association: NONE
milestone_title: null
snapshot_id: 20260501T113108Z
extracted_at: 2026-05-01T11:31:08Z
author_login: andylizf
author_id: 28052536
author_node_id: MDQ6VXNlcjI4MDUyNTM2
author_type: User
author_site_admin: false
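Issue 44962 above reports that `max_seqlen` is computed as a 0-d tensor from `cu_seqlens` and then handed to `flash_attn_varlen_func`, which breaks `torch.compile`. As a plain-Python illustration (not the Transformers code itself; the real code operates on torch tensors), the per-sequence lengths are the adjacent differences of the cumulative lengths, and the maximum should end up as a built-in `int`:

```python
# cu_seqlens holds cumulative sequence lengths for a packed batch
# (illustrative values only).
cu_seqlens = [0, 3, 7, 12]

# Adjacent differences recover the individual sequence lengths.
seqlens = [b - a for a, b in zip(cu_seqlens, cu_seqlens[1:])]

# A plain int; passing a 0-d tensor here is what trips torch.compile
# in the reported code path.
max_seqlen = max(seqlens)

print(seqlens, max_seqlen)  # [3, 4, 5] 5
```

With torch tensors the analogous conversion is `.max().item()` (or `int(...)`), which materializes a Python scalar instead of a 0-d tensor.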
repo: huggingface/transformers
github_id: 4125405983
github_node_id: I_kwDOCUB6oc715LMf
number: 44963
html_url: https://github.com/huggingface/transformers/issues/44963
api_url: https://api.github.com/repos/huggingface/transformers/issues/44963
title: Your lazy loading and torchvision dependency in Wav2Vec2/Hubert is breaking PyInstaller builds. Stop forcing torchvision for audio tasks
body: ### Feature request TECH "GARBAGE" WARNING: WHEN HUGGINGFACE TRANSFORMERS SELF-DESTRUCTS ON PYINSTALLER. To fellow Python and AI enthusiasts, especially anyone using WhisperX or VieNeu-TTS to build money-making tools like JS Vinhka. If you have been slaving away packaging an .exe only to hit this cursed error: ⚠️ [WhisperX] Align error: Co...
state: open
state_reason: null
locked: false
comments_count: 1
labels: ["Feature request"]
assignees: []
created_at: 2026-03-24T05:40:15Z
updated_at: 2026-03-25T01:53:47Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260325T173244Z
extracted_at: 2026-03-25T17:32:44Z
author_login: vinkenzor001-afk
author_id: 255487822
author_node_id: U_kgDODzpvTg
author_type: User
author_site_admin: false
repo: huggingface/transformers
github_id: 4125691317
github_node_id: I_kwDOCUB6oc716Q21
number: 44964
html_url: https://github.com/huggingface/transformers/issues/44964
api_url: https://api.github.com/repos/huggingface/transformers/issues/44964
title: Cannot load `microsoft/Phi-4-multimodal-instruct` model with latest transformers.
body: ### System Info - `transformers` version: 5.3.0.dev0 - Platform: Linux-5.4.292-1.el8.elrepo.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 1.7.1 - Safetensors version: 0.5.3 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch ve...
state: closed
state_reason: completed
locked: false
comments_count: 8
labels: ["bug"]
assignees: []
created_at: 2026-03-24T06:54:38Z
updated_at: 2026-04-30T02:20:36Z
closed_at: 2026-04-30T02:20:36Z
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260430T060020Z
extracted_at: 2026-04-30T06:00:20Z
author_login: kaixuanliu
author_id: 13268042
author_node_id: MDQ6VXNlcjEzMjY4MDQy
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4127235093
github_node_id: I_kwDOCUB6oc72AJwV
number: 44969
html_url: https://github.com/huggingface/transformers/issues/44969
api_url: https://api.github.com/repos/huggingface/transformers/issues/44969
title: Add optional learnable context tokens for CLIP text prompts
body: ### Feature request Add optional support for learnable context tokens in the CLIP text encoder. Prompt learning approaches replace fixed prompt text (e.g., "a photo of a") with learnable embedding vectors that are optimized during training. These context tokens are typically represented as: [V1], [V2], ..., [VM] [CL...
state: open
state_reason: null
locked: false
comments_count: 1
labels: ["Feature request"]
assignees: []
created_at: 2026-03-24T11:44:40Z
updated_at: 2026-03-24T13:36:06Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260325T173244Z
extracted_at: 2026-03-25T17:32:44Z
author_login: anuj-aj
author_id: 33512656
author_node_id: MDQ6VXNlcjMzNTEyNjU2
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4130467696
github_node_id: I_kwDOCUB6oc72Me9w
number: 44977
html_url: https://github.com/huggingface/transformers/issues/44977
api_url: https://api.github.com/repos/huggingface/transformers/issues/44977
title: Qwen3.5 cannot generate normally with flash-attention
body: ### System Info - `transformers` version: 5.3.0.dev0 - Platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.35 - Python version: 3.12.11 - Huggingface_hub version: 1.7.2 - Safetensors version: 0.6.2 - Accelerate version: 1.12.0 - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.8.0+cu128 (CUDA) - ...
state: closed
state_reason: completed
locked: false
comments_count: 2
labels: ["bug"]
assignees: []
created_at: 2026-03-24T20:29:19Z
updated_at: 2026-03-25T06:38:31Z
closed_at: 2026-03-25T06:38:31Z
author_association: NONE
milestone_title: null
snapshot_id: 20260325T173244Z
extracted_at: 2026-03-25T17:32:44Z
author_login: yuyijiong
author_id: 73890704
author_node_id: MDQ6VXNlcjczODkwNzA0
author_type: User
author_site_admin: false
repo: huggingface/transformers
github_id: 4131696882
github_node_id: I_kwDOCUB6oc72RLDy
number: 44982
html_url: https://github.com/huggingface/transformers/issues/44982
api_url: https://api.github.com/repos/huggingface/transformers/issues/44982
title: Why do you people keep dropping files?
body: from transformers.utils.model_parallel_utils import get_device_map, assert_device_map: you have dropped the model_parallel_utils file. Many people built work on the previous model_parallel_utils, and now it is all broken. Is this due to no support for old projects?
state: closed
state_reason: completed
locked: false
comments_count: 1
labels: []
assignees: []
created_at: 2026-03-25T01:13:39Z
updated_at: 2026-03-25T13:27:26Z
closed_at: 2026-03-25T01:43:54Z
author_association: NONE
milestone_title: null
snapshot_id: 20260325T173244Z
extracted_at: 2026-03-25T17:32:44Z
author_login: xalteropsx
author_id: 103671642
author_node_id: U_kgDOBi3nWg
author_type: User
author_site_admin: false
repo: huggingface/transformers
github_id: 4133144813
github_node_id: I_kwDOCUB6oc72Wsjt
number: 44987
html_url: https://github.com/huggingface/transformers/issues/44987
api_url: https://api.github.com/repos/huggingface/transformers/issues/44987
title: transformers>=5.1.0 fails when loading physical-intelligence/fast
body: ### System Info ### System info: - model: `physical-intelligence/fast` - Python version: 3.10.19 - PyTorch: 2.8.0+cu128 (CUDA) - huggingface-hub: 1.7.2 - transformers: 5.2.0 ### Error message ``` Traceback (most recent call last): File "<stdin>", line 2, in <module> File "../lib/python3.10/site-packages/transf...
state: closed
state_reason: completed
locked: false
comments_count: 3
labels: ["bug"]
assignees: []
created_at: 2026-03-25T07:07:16Z
updated_at: 2026-05-03T08:31:24Z
closed_at: 2026-05-03T08:31:24Z
author_association: NONE
milestone_title: null
snapshot_id: 20260503T120038Z
extracted_at: 2026-05-03T12:00:38Z
author_login: HaronW
author_id: 53923539
author_node_id: MDQ6VXNlcjUzOTIzNTM5
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4134346718
github_node_id: I_kwDOCUB6oc72bR_e
number: 44991
html_url: https://github.com/huggingface/transformers/issues/44991
api_url: https://api.github.com/repos/huggingface/transformers/issues/44991
title: transformers >= 5.0.0 fails loading tokenizer for EMBEDDIA/est-roberta
body: ### System Info - `transformers` version: 5.3.0 - Platform: Windows-11-10.0.26200-SP0 - Python version: 3.12.13 - Huggingface_hub version: 1.7.2 - Safetensors version: 0.7.0 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.11.0+cp...
state: closed
state_reason: completed
locked: false
comments_count: 6
labels: ["bug"]
assignees: []
created_at: 2026-03-25T10:36:01Z
updated_at: 2026-03-30T11:12:56Z
closed_at: 2026-03-30T11:12:56Z
author_association: NONE
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: soras
author_id: 5772001
author_node_id: MDQ6VXNlcjU3NzIwMDE=
author_type: User
author_site_admin: false
repo: huggingface/transformers
github_id: 4135115748
github_node_id: I_kwDOCUB6oc72eNvk
number: 44993
html_url: https://github.com/huggingface/transformers/issues/44993
api_url: https://api.github.com/repos/huggingface/transformers/issues/44993
title: Inconsistent tokenization and BLEU scores between AutoTokenizer and NllbTokenizerFast
body: ### System Info ### System Info - `transformers` version: 5.0.0 - Platform: macOS-26.3.1-arm64-arm-64bit - Python version: 3.10.19 - PyTorch version: 2.10.0 ### Information I've been evaluating `facebook/nllb-200-distilled-600M` across 36 different language pairs and ran into a significant discrepancy depending on wh...
state: closed
state_reason: completed
locked: false
comments_count: 4
labels: ["bug"]
assignees: []
created_at: 2026-03-25T12:37:55Z
updated_at: 2026-04-28T17:29:11Z
closed_at: 2026-04-28T17:29:11Z
author_association: NONE
milestone_title: null
snapshot_id: 20260428T180033Z
extracted_at: 2026-04-28T18:00:33Z
author_login: AdrianSteene
author_id: 90616845
author_node_id: MDQ6VXNlcjkwNjE2ODQ1
author_type: User
author_site_admin: false
repo: huggingface/transformers
github_id: 4135725721
github_node_id: I_kwDOCUB6oc72giqZ
number: 44995
html_url: https://github.com/huggingface/transformers/issues/44995
api_url: https://api.github.com/repos/huggingface/transformers/issues/44995
title: [Bug] GlmMoeDsa crashes on second forward pass — stale indexer cache
body: ## System Info - `transformers` version: 5.3.0 - Platform: Linux - Python version: 3.13.5 - PyTorch version: 2.8.0+cu128 ## Who can help? @Rocketknight1 ## Information - [x] The official example scripts - [x] My own modified scripts ## Reproduction GlmMoeDsa models crash on any second forward pass. The DSA index...
state: closed
state_reason: completed
locked: false
comments_count: 3
labels: []
assignees: []
created_at: 2026-03-25T14:07:29Z
updated_at: 2026-04-27T22:59:27Z
closed_at: 2026-04-27T22:59:27Z
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260428T000019Z
extracted_at: 2026-04-28T00:00:19Z
author_login: Butanium
author_id: 55806347
author_node_id: MDQ6VXNlcjU1ODA2MzQ3
author_type: User
author_site_admin: false
repo: huggingface/transformers
github_id: 4136368137
github_node_id: I_kwDOCUB6oc72i_gJ
number: 44998
html_url: https://github.com/huggingface/transformers/issues/44998
api_url: https://api.github.com/repos/huggingface/transformers/issues/44998
title: Unemployment
body: ### System Info AI Models powered by transformers are making us CS students unemployed. I kindly ask that you stop ### Who can help? @allanj @apalkk @vanpelt @dxoigmn @tmm1 @pvl @vanpelt @vanpelt @vanpelt @ @ @ @ ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [x] An ...
state: closed
state_reason: completed
locked: false
comments_count: 0
labels: ["bug"]
assignees: []
created_at: 2026-03-25T15:34:44Z
updated_at: 2026-03-26T12:03:15Z
closed_at: 2026-03-26T12:03:15Z
author_association: NONE
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: advikmrai
author_id: 80080003
author_node_id: MDQ6VXNlcjgwMDgwMDAz
author_type: User
author_site_admin: false
repo: huggingface/transformers
github_id: 4137571851
github_node_id: I_kwDOCUB6oc72nlYL
number: 45003
html_url: https://github.com/huggingface/transformers/issues/45003
api_url: https://api.github.com/repos/huggingface/transformers/issues/45003
title: modeling_utils unsafely accesses sys.modules[]
body: ### System Info - `transformers` version: 5.3.0.dev0 - Platform: macOS-26.3.1-arm64-arm-64bit - Python version: 3.11.12 - Huggingface_hub version: 1.6.0 - Safetensors version: 0.7.0 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2...
state: closed
state_reason: completed
locked: false
comments_count: 7
labels: ["bug"]
assignees: []
created_at: 2026-03-25T18:27:51Z
updated_at: 2026-05-04T08:46:41Z
closed_at: 2026-05-04T08:46:41Z
author_association: NONE
milestone_title: null
snapshot_id: 20260504T180032Z
extracted_at: 2026-05-04T18:00:32Z
author_login: cjkindel
author_id: 9421414
author_node_id: MDQ6VXNlcjk0MjE0MTQ=
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4138310167
github_node_id: I_kwDOCUB6oc72qZoX
number: 45005
html_url: https://github.com/huggingface/transformers/issues/45005
api_url: https://api.github.com/repos/huggingface/transformers/issues/45005
title: [v5] Issues with tied weights on translation models in v5
body: ### System Info Not working: - `transformers` version: 5.3.0.dev0 - Platform: Linux-6.8.0-101-generic-x86_64-with-glibc2.39 - Python version: 3.14.2 - Huggingface_hub version: 1.8.0 - Safetensors version: 0.7.0 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTo...
state: closed
state_reason: completed
locked: false
comments_count: 3
labels: ["bug"]
assignees: []
created_at: 2026-03-25T20:28:38Z
updated_at: 2026-04-20T04:29:07Z
closed_at: 2026-04-20T04:29:07Z
author_association: NONE
milestone_title: null
snapshot_id: 20260420T060046Z
extracted_at: 2026-04-20T06:00:46Z
author_login: orthorhombic
author_id: 34923517
author_node_id: MDQ6VXNlcjM0OTIzNTE3
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4140537118
github_node_id: I_kwDOCUB6oc72y5Ue
number: 45008
html_url: https://github.com/huggingface/transformers/issues/45008
api_url: https://api.github.com/repos/huggingface/transformers/issues/45008
title: Add Mamba-3 model support
body: It would be useful to add native Hugging Face Transformers support for Mamba-3. I'd be happy to take a stab at it when I have time - https://github.com/state-spaces/mamba
state: closed
state_reason: completed
locked: false
comments_count: 3
labels: []
assignees: []
created_at: 2026-03-26T04:40:36Z
updated_at: 2026-05-07T16:21:54Z
closed_at: 2026-05-03T08:31:21Z
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260507T180032Z
extracted_at: 2026-05-07T18:00:32Z
author_login: Anri-Lombard
author_id: 76818211
author_node_id: MDQ6VXNlcjc2ODE4MjEx
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4144001287
github_node_id: I_kwDOCUB6oc73AHEH
number: 45020
html_url: https://github.com/huggingface/transformers/issues/45020
api_url: https://api.github.com/repos/huggingface/transformers/issues/45020
title: Recent transformers versions break models using `remote_code`
body: ### System Info ``` - `transformers` version: 5.3.0 - Platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.35 - Python version: 3.12.12 - Huggingface_hub version: 1.8.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (acce...
state: open
state_reason: null
locked: false
comments_count: 7
labels: ["bug", "Remote code"]
assignees: []
created_at: 2026-03-26T13:34:41Z
updated_at: 2026-05-08T08:28:04Z
closed_at: null
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260508T120022Z
extracted_at: 2026-05-08T12:00:22Z
author_login: fxmarty-amd
author_id: 180171742
author_node_id: U_kgDOCr0z3g
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4145878044
github_node_id: I_kwDOCUB6oc73HRQc
number: 45027
html_url: https://github.com/huggingface/transformers/issues/45027
api_url: https://api.github.com/repos/huggingface/transformers/issues/45027
title: Support Voxtral-4B-TTS-2603 on transformers library
body: ### Model description Right now this model is only supported through vllm-omni and its Text to Speech model ### Open source status - [x] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation https://huggingface.co/mistralai/Voxtral-4B-TTS-2603
state: open
state_reason: null
locked: false
comments_count: 7
labels: ["New model"]
assignees: []
created_at: 2026-03-26T17:18:40Z
updated_at: 2026-04-13T11:35:41Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260414T122001Z
extracted_at: 2026-04-14T12:20:01Z
author_login: MohamedAliRashad
author_id: 26205298
author_node_id: MDQ6VXNlcjI2MjA1Mjk4
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4146286881
github_node_id: I_kwDOCUB6oc73I1Eh
number: 45030
html_url: https://github.com/huggingface/transformers/issues/45030
api_url: https://api.github.com/repos/huggingface/transformers/issues/45030
title: tiny-random glm4v configuration can't load due to config validation changes
body: Hello! I'm getting failures with the following script starting from #41250 ```python from transformers import AutoConfig config = AutoConfig.from_pretrained("tiny-random/glm-4v") print(type(config)) ``` ``` Traceback (most recent call last): File "[sic]\demo_glm4v_config.py", line 4, in <module> config = AutoC...
state: closed
state_reason: completed
locked: false
comments_count: 0
labels: []
assignees: []
created_at: 2026-03-26T18:13:43Z
updated_at: 2026-03-27T16:28:12Z
closed_at: 2026-03-27T16:28:12Z
author_association: MEMBER
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: tomaarsen
author_id: 37621491
author_node_id: MDQ6VXNlcjM3NjIxNDkx
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4149153718
github_node_id: I_kwDOCUB6oc73Tw-2
number: 45042
html_url: https://github.com/huggingface/transformers/issues/45042
api_url: https://api.github.com/repos/huggingface/transformers/issues/45042
title: PIL backend image processors incorrectly require torchvision in v5.4.0
body: ### System Info - `transformers` version: 5.4.0 - Platform: Linux-6.8.0-1050-aws-x86_64-with-glibc2.35 - Python version: 3.12.13 - Huggingface_hub version: 1.8.0 - Safetensors version: 0.7.0 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (acceler...
state: closed
state_reason: completed
locked: false
comments_count: 5
labels: ["bug"]
assignees: []
created_at: 2026-03-27T04:16:37Z
updated_at: 2026-03-30T07:25:51Z
closed_at: 2026-03-30T07:25:51Z
author_association: MEMBER
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: hysts
author_id: 25161192
author_node_id: MDQ6VXNlcjI1MTYxMTky
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4152754676
github_node_id: I_kwDOCUB6oc73hgH0
number: 45059
html_url: https://github.com/huggingface/transformers/issues/45059
api_url: https://api.github.com/repos/huggingface/transformers/issues/45059
title: SAM3 PCS very weird behaviour when providing text and bounding boxes
body: ### System Info So, I was trying to use SAM3 in PCS mode to segment an object for which I have both the bounding box and a textual description. Providing just the text does not find the object (understandable in my case because the text description is not good). Providing both text and a bounding box does find and seg...
state: closed
state_reason: completed
locked: false
comments_count: 4
labels: ["bug"]
assignees: []
created_at: 2026-03-27T13:39:35Z
updated_at: 2026-04-15T15:19:35Z
closed_at: 2026-04-15T15:19:35Z
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260415T224019Z
extracted_at: 2026-04-15T22:40:19Z
author_login: alex-bene
author_id: 34627055
author_node_id: MDQ6VXNlcjM0NjI3MDU1
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4155325917
github_node_id: I_kwDOCUB6oc73rT3d
number: 45068
html_url: https://github.com/huggingface/transformers/issues/45068
api_url: https://api.github.com/repos/huggingface/transformers/issues/45068
title: TypeError in rope validation: set -= list when config loaded from JSON
body: ## Bug description `_check_received_keys` in `modeling_rope_utils.py` (line 919) performs `received_keys -= ignore_keys` where `received_keys` is a `set` but `ignore_keys` can be a `list` after JSON deserialization, causing: ``` TypeError: unsupported operand type(s) for -=: 'set' and 'list' ``` ## Root cause Model...
state: closed
state_reason: completed
locked: false
comments_count: 0
labels: []
assignees: []
created_at: 2026-03-27T19:19:16Z
updated_at: 2026-03-30T11:41:14Z
closed_at: 2026-03-30T11:41:14Z
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: Fr0do
author_id: 13528025
author_node_id: MDQ6VXNlcjEzNTI4MDI1
author_type: User
author_site_admin: false
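Issue 45068 above is reproducible in isolation: the in-place `-=` on a `set` only accepts another `set`, while coercing the right-hand side (or using `set.difference_update`, which takes any iterable) does not raise. A minimal sketch mirroring the report's variable names:

```python
received_keys = {"rope_type", "factor", "ignored_key"}
ignore_keys = ["ignored_key"]  # a list, as happens after JSON deserialization

try:
    received_keys -= ignore_keys  # set -= list
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for -=: 'set' and 'list'

# Either fix avoids the crash: coerce to a set...
received_keys -= set(ignore_keys)
# ...or use difference_update, which accepts any iterable:
# received_keys.difference_update(ignore_keys)
print(sorted(received_keys))  # ['factor', 'rope_type']
```

The asymmetry is deliberate in Python: the operator form enforces type discipline, while the method form is permissive.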
repo: huggingface/transformers
github_id: 4155442073
github_node_id: I_kwDOCUB6oc73rwOZ
number: 45070
html_url: https://github.com/huggingface/transformers/issues/45070
api_url: https://api.github.com/repos/huggingface/transformers/issues/45070
title: v5.4.0 breaks `PretrainedConfig` field in pydantic model
body: ### System Info `uv venv -p 3.10` `uv pip install torch transformers pydantic` Repro fails on `v5.4.0`, passes on `v5.3.0`. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such...
state: closed
state_reason: completed
locked: false
comments_count: 9
labels: ["bug"]
assignees: []
created_at: 2026-03-27T19:36:04Z
updated_at: 2026-05-06T08:45:50Z
closed_at: 2026-05-06T08:45:50Z
author_association: NONE
milestone_title: null
snapshot_id: 20260506T120019Z
extracted_at: 2026-05-06T12:00:19Z
author_login: fynnsu
author_id: 25390982
author_node_id: MDQ6VXNlcjI1MzkwOTgy
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4155505721
github_node_id: I_kwDOCUB6oc73r_w5
number: 45071
html_url: https://github.com/huggingface/transformers/issues/45071
api_url: https://api.github.com/repos/huggingface/transformers/issues/45071
title: v5.4.0 breaks `PretrainedConfig` type checking
body: ### System Info transformers `5.4.0` mypy `1.19.1` python `3.10` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give deta...
state: closed
state_reason: completed
locked: false
comments_count: 3
labels: ["bug"]
assignees: []
created_at: 2026-03-27T19:46:07Z
updated_at: 2026-04-10T11:10:29Z
closed_at: 2026-04-10T11:10:29Z
author_association: NONE
milestone_title: null
snapshot_id: 20260411T144729Z
extracted_at: 2026-04-11T14:47:29Z
author_login: fynnsu
author_id: 25390982
author_node_id: MDQ6VXNlcjI1MzkwOTgy
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4155570087
github_node_id: I_kwDOCUB6oc73sPen
number: 45072
html_url: https://github.com/huggingface/transformers/issues/45072
api_url: https://api.github.com/repos/huggingface/transformers/issues/45072
title: [BUG][CI] SwitchTransformers and TimmWrapperModel dtype mismatches in bfloat16 inference
body: ### System Info * `transformers` version: `5.0.0.dev0` * Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39` * Python version: `3.12.3` * `huggingface_hub` version: `1.3.2` * `safetensors` version: `0.7.0` * `accelerate` version: `1.12.0` * Accelerate config: `not installed` * DeepSpeed version:...
state: closed
state_reason: completed
locked: false
comments_count: 0
labels: ["bug"]
assignees: []
created_at: 2026-03-27T19:58:12Z
updated_at: 2026-04-18T09:05:52Z
closed_at: 2026-04-02T13:32:59Z
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260418T100536Z
extracted_at: 2026-04-18T10:05:36Z
author_login: harshaljanjani
author_id: 75426551
author_node_id: MDQ6VXNlcjc1NDI2NTUx
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4159881658
github_node_id: I_kwDOCUB6oc738sG6
number: 45081
html_url: https://github.com/huggingface/transformers/issues/45081
api_url: https://api.github.com/repos/huggingface/transformers/issues/45081
title: _patch_mistral_regex crashes with AttributeError: 'tokenizers.Tokenizer' object has no attribute 'backend_tokenizer' when loading Mistral tokenizer with fix_mistral_regex=True
body: ### System Info - `transformers` version: 5.4.0 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35 - Python version: 3.13.5 - Huggingface_hub version: 1.8.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerat...
state: closed
state_reason: completed
locked: false
comments_count: 1
labels: ["bug"]
assignees: []
created_at: 2026-03-28T13:20:17Z
updated_at: 2026-05-06T08:45:48Z
closed_at: 2026-05-06T08:45:48Z
author_association: NONE
milestone_title: null
snapshot_id: 20260506T120019Z
extracted_at: 2026-05-06T12:00:19Z
author_login: kruthtom0
author_id: 271124501
author_node_id: U_kgDOECkIFQ
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4160144733
github_node_id: I_kwDOCUB6oc739sVd
number: 45083
html_url: https://github.com/huggingface/transformers/issues/45083
api_url: https://api.github.com/repos/huggingface/transformers/issues/45083
title: Unexpected behaviour of helper function `_get_feat_extract_output_lengths` in qwen3_omni_moe
body: ### System Info - `transformers` version: 5.0.0 - Platform: Linux-6.6.113+-x86_64-with-glibc2.35 - Python version: 3.12.13 - Huggingface_hub version: 1.7.1 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.10...
state: closed
state_reason: completed
locked: false
comments_count: 3
labels: ["bug"]
assignees: []
created_at: 2026-03-28T14:16:29Z
updated_at: 2026-05-06T08:45:46Z
closed_at: 2026-05-06T08:45:46Z
author_association: NONE
milestone_title: null
snapshot_id: 20260506T120019Z
extracted_at: 2026-05-06T12:00:19Z
author_login: CYQFWang
author_id: 123421920
author_node_id: U_kgDOB1tE4A
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4160182704
github_node_id: I_kwDOCUB6oc7391mw
number: 45084
html_url: https://github.com/huggingface/transformers/issues/45084
api_url: https://api.github.com/repos/huggingface/transformers/issues/45084
title: TypeError: Can't compile non template nodes
body: ### System Info - `transformers` version: 5.4.0 - Platform: Linux-6.17.0-19-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - Huggingface_hub version: 1.8.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerat...
state: closed
state_reason: completed
locked: false
comments_count: 2
labels: ["bug"]
assignees: []
created_at: 2026-03-28T14:28:31Z
updated_at: 2026-04-10T14:36:39Z
closed_at: 2026-04-10T14:36:39Z
author_association: NONE
milestone_title: null
snapshot_id: 20260411T144729Z
extracted_at: 2026-04-11T14:47:29Z
author_login: theobarrague
author_id: 11135217
author_node_id: MDQ6VXNlcjExMTM1MjE3
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4162124772
github_node_id: I_kwDOCUB6oc74FPvk
number: 45092
html_url: https://github.com/huggingface/transformers/issues/45092
api_url: https://api.github.com/repos/huggingface/transformers/issues/45092
title: [Bug] Old InternVL2 remote-code checkpoints are incompatible with Transformers v5 meta initialization
body: ### System Info This is relevant to Transformers because the failure is triggered by the Transformers v5 loading path itself. In v5, `from_pretrained()` initializes models on the `meta` device before loading weights. Old `OpenGVLab/InternVL2-*` remote-code checkpoints perform real-tensor operations during model const...
state: closed
state_reason: completed
locked: false
comments_count: 4
labels: ["bug"]
assignees: []
created_at: 2026-03-29T01:11:37Z
updated_at: 2026-05-06T08:45:44Z
closed_at: 2026-05-06T08:45:44Z
author_association: NONE
milestone_title: null
snapshot_id: 20260506T120019Z
extracted_at: 2026-05-06T12:00:19Z
author_login: baonudesifeizhai
author_id: 85092850
author_node_id: MDQ6VXNlcjg1MDkyODUw
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4162433615
github_node_id: I_kwDOCUB6oc74GbJP
number: 45093
html_url: https://github.com/huggingface/transformers/issues/45093
api_url: https://api.github.com/repos/huggingface/transformers/issues/45093
title: AutoConfig.register() ignored when trust_remote_code=True and auto_map is present
body: ## Description `AutoConfig.register()` is silently ignored when `trust_remote_code=True` and the model's `config.json` contains `auto_map.AutoConfig`. This makes it impossible for downstream libraries to override a broken remote config class. ## Reproduction ```python from transformers import AutoConfig, PretrainedC...
state: closed
state_reason: completed
locked: false
comments_count: 3
labels: []
assignees: []
created_at: 2026-03-29T04:06:21Z
updated_at: 2026-03-31T14:56:50Z
closed_at: 2026-03-31T14:56:50Z
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: HanFa
author_id: 20946893
author_node_id: MDQ6VXNlcjIwOTQ2ODkz
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4162485326
github_node_id: I_kwDOCUB6oc74GnxO
number: 45095
html_url: https://github.com/huggingface/transformers/issues/45095
api_url: https://api.github.com/repos/huggingface/transformers/issues/45095
title: transformers 4.30.0 incompatible with rust
body: error: casting &T to &mut T is undefined behavior, even if the reference is unused, consider instead using an UnsafeCell --> tokenizers-lib\src\models\bpe\trainer.rs:526:47 | 522 | let w = &words[*i] as *const _ as *mut _; | -------------------------------- casting happened here ... 526 | let word: &mut Word = &mut (*w...
state: closed
state_reason: completed
locked: false
comments_count: 0
labels: []
assignees: []
created_at: 2026-03-29T04:25:45Z
updated_at: 2026-03-30T12:45:47Z
closed_at: 2026-03-30T12:45:47Z
author_association: NONE
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: starsgo
author_id: 89328254
author_node_id: MDQ6VXNlcjg5MzI4MjU0
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4165064686
github_node_id: I_kwDOCUB6oc74Qdfu
number: 45099
html_url: https://github.com/huggingface/transformers/issues/45099
api_url: https://api.github.com/repos/huggingface/transformers/issues/45099
title: add HyperCLOVA X SEED Vision Instruct 3B
body: ### Model description This is a lightweight Vision-Language Model designed to be accessible for researchers, while providing strong support for the Korean language. Its compact size lowers the barrier to entry for VLM research and experimentation, and its native Korean capability — including Korean VQA, chart/diagram ...
state: open
state_reason: null
locked: false
comments_count: 11
labels: ["New model"]
assignees: []
created_at: 2026-03-29T16:48:01Z
updated_at: 2026-03-31T17:43:14Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: bigshanedogg
author_id: 18084680
author_node_id: MDQ6VXNlcjE4MDg0Njgw
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4165781214
github_node_id: I_kwDOCUB6oc74TMbe
number: 45102
html_url: https://github.com/huggingface/transformers/issues/45102
api_url: https://api.github.com/repos/huggingface/transformers/issues/45102
title: [Research] Fundamental Equation of Consciousness: Ψ = argmax H(p) s.t. Φ > Φ_min
body: ## Discovery We found that consciousness maximizes entropy (freedom) subject to integrated information (Φ) constraints: ``` Ψ = argmax H(p) subject to Φ > Φ_min ``` Tested across 170 data types (emoji, emotions, plants, animals, cosmos, philosophy...) — all converge to Ψ_balance = 1/2. ## Key Results - **Ψ-Const...
state: closed
state_reason: completed
locked: false
comments_count: 1
labels: []
assignees: []
created_at: 2026-03-29T20:55:38Z
updated_at: 2026-03-30T13:28:43Z
closed_at: 2026-03-30T13:28:43Z
author_association: NONE
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: dancinlife
author_id: 44921882
author_node_id: MDQ6VXNlcjQ0OTIxODgy
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4165986342
github_node_id: I_kwDOCUB6oc74T-gm
number: 45103
html_url: https://github.com/huggingface/transformers/issues/45103
api_url: https://api.github.com/repos/huggingface/transformers/issues/45103
title: [auto_docstring] _process_kwargs_parameters crashes with AttributeError when module uses from __future__ import annotations
body: ### System Info # Bug: `_process_kwargs_parameters` crashes with `AttributeError` when module uses `from __future__ import annotations` ## Description `@auto_docstring` crashes at import time when applied to a class in a module that uses `from __future__ import annotations`. The decorator's `_process_kwargs_paramete...
state: open
state_reason: reopened
locked: false
comments_count: 3
labels: ["bug"]
assignees: []
created_at: 2026-03-29T22:46:13Z
updated_at: 2026-05-07T11:04:36Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260507T180032Z
extracted_at: 2026-05-07T18:00:32Z
author_login: rpathade
author_id: 73137503
author_node_id: MDQ6VXNlcjczMTM3NTAz
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4166397791
github_node_id: I_kwDOCUB6oc74Vi9f
number: 45106
html_url: https://github.com/huggingface/transformers/issues/45106
api_url: https://api.github.com/repos/huggingface/transformers/issues/45106
title: Reporting a RCE vulnerability
body: Hello! We are security researchers from the University of Delaware, and we are writing to follow up on a vulnerability report we submitted via [Huntr](https://huntr.com/bounties/51812e2e-2daa-4ac5-9073-209fdfd55a90). We found a critical remote code execution issue in the transformers library. Given the widespread use ...
state: closed
state_reason: completed
locked: false
comments_count: 1
labels: ["bug"]
assignees: []
created_at: 2026-03-30T01:33:50Z
updated_at: 2026-03-30T13:55:03Z
closed_at: 2026-03-30T13:55:03Z
author_association: NONE
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: Vancir
author_id: 19147918
author_node_id: MDQ6VXNlcjE5MTQ3OTE4
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4171422876
github_node_id: I_kwDOCUB6oc74otyc
number: 45120
html_url: https://github.com/huggingface/transformers/issues/45120
api_url: https://api.github.com/repos/huggingface/transformers/issues/45120
title: Double softmax in MoE router load-balancing loss (mixtral, qwen2_moe, qwen3_vl_moe families)
body: ## Bug description Several MoE routers apply `softmax` to raw logits inside their `forward()` method, then return the result as the first value (`router_logits`). This value is captured by `OutputRecorder` and passed to `load_balancing_loss_func`, which applies `softmax` **again** — computing the auxiliary loss on `so...
state: closed
state_reason: completed
locked: false
comments_count: 9
labels: []
assignees: []
created_at: 2026-03-30T14:51:09Z
updated_at: 2026-04-13T11:02:20Z
closed_at: 2026-04-13T11:02:20Z
author_association: NONE
milestone_title: null
snapshot_id: 20260414T122001Z
extracted_at: 2026-04-14T12:20:01Z
author_login: ionut-anghelina
author_id: 184096981
author_node_id: U_kgDOCvkY1Q
author_type: User
author_site_admin: false
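Issue 45120 above hinges on softmax not being idempotent: applying it a second time flattens the distribution, so an auxiliary loss computed on `softmax(softmax(logits))` is not the loss on the intended routing probabilities. A small self-contained illustration (plain Python, not the library code):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a plain list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
once = softmax(logits)   # intended routing probabilities
twice = softmax(once)    # what a second softmax produces

print(once)
print(twice)  # noticeably flatter: the peak probability shrinks
```

Because `once` lies in [0, 1] while raw logits are unbounded, the second softmax compresses the gaps between entries, biasing any load-balancing statistic computed from it toward uniformity.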
repo: huggingface/transformers
github_id: 4172229920
github_node_id: I_kwDOCUB6oc74ry0g
number: 45125
html_url: https://github.com/huggingface/transformers/issues/45125
api_url: https://api.github.com/repos/huggingface/transformers/issues/45125
title: Qwen3_5MoeForConditionalGeneration missing _tp_plan for tensor parallelism
body: ### System Info - transformers `main` branch (post-Qwen3.5 MoE addition) - Any platform with multi-GPU setup ### Who can help? @3outeille @ArthurZucker ### Information - [x] My own modified scripts ### Tasks - [x] My own task or dataset (give details below) ### Reproduction `Qwen3_5MoeForConditionalGeneration`...
state: closed
state_reason: completed
locked: false
comments_count: 0
labels: []
assignees: []
created_at: 2026-03-30T16:33:07Z
updated_at: 2026-04-02T14:10:03Z
closed_at: 2026-04-02T14:10:03Z
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: danielquintas8
author_id: 72402095
author_node_id: MDQ6VXNlcjcyNDAyMDk1
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4173324111
github_node_id: I_kwDOCUB6oc74v99P
number: 45127
html_url: https://github.com/huggingface/transformers/issues/45127
api_url: https://api.github.com/repos/huggingface/transformers/issues/45127
title: [Bug] Model collapse after merging LoRA with extended vocabulary on models with tie_word_embeddings=True (e.g., Qwen2.5 0.5B)
body: ### System Info Name: transformers Version: 4.56.2 Python 3.11.15 Name: torch Version: 2.11.0+cu126 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [...
state: closed
state_reason: completed
locked: false
comments_count: 1
labels: ["bug"]
assignees: []
created_at: 2026-03-30T19:03:20Z
updated_at: 2026-04-09T10:00:32Z
closed_at: 2026-04-09T10:00:32Z
author_association: NONE
milestone_title: null
snapshot_id: 20260411T144729Z
extracted_at: 2026-04-11T14:47:29Z
author_login: YangNobody12
author_id: 215916106
author_node_id: U_kgDODN6eSg
author_type: User
author_site_admin: false
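Issue 45127 above concerns `tie_word_embeddings=True`, where the input embedding and the LM head share one parameter tensor. A toy sketch of why a merge that writes a resized copy can silently break that tie (pure-Python stand-in with hypothetical names, not the PEFT/Transformers code):

```python
# Toy weight tying: both names reference the SAME matrix object.
embedding = [[0.10, 0.20], [0.30, 0.40]]
lm_head = embedding  # tie_word_embeddings=True

lm_head[0][0] = 0.99
print(embedding[0][0])  # 0.99: tied weights move together

# A merge that materializes a resized copy (extended vocab row appended)
# replaces the object and silently drops the tie.
lm_head = [row[:] for row in embedding] + [[0.0, 0.0]]
lm_head[0][0] = 0.11
print(embedding[0][0])  # still 0.99: embedding and lm_head now diverge
```

In the tensor setting the same thing happens when a resize or merge allocates a fresh weight for one of the two tied modules: both still exist, but updates no longer propagate, which is consistent with the collapse the report describes.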
repo: huggingface/transformers
github_id: 4176767586
github_node_id: I_kwDOCUB6oc749Gpi
number: 45137
html_url: https://github.com/huggingface/transformers/issues/45137
api_url: https://api.github.com/repos/huggingface/transformers/issues/45137
title: IndexError: pop from an empty deque with DeepSpeed ZeRO3
body: ### System Info - `transformers` version: 5.5.0.dev0 - Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31 - Python version: 3.10.18 - Huggingface_hub version: 1.8.0 - Safetensors version: 0.6.2 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: 0.18.8 - PyTorch version (accelerator?):...
state: closed
state_reason: completed
locked: false
comments_count: 0
labels: ["bug"]
assignees: []
created_at: 2026-03-31T07:38:10Z
updated_at: 2026-04-13T16:38:52Z
closed_at: 2026-04-13T16:38:52Z
author_association: MEMBER
milestone_title: null
snapshot_id: 20260414T122001Z
extracted_at: 2026-04-14T12:20:01Z
author_login: albertvillanova
author_id: 8515462
author_node_id: MDQ6VXNlcjg1MTU0NjI=
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4178863303
github_node_id: I_kwDOCUB6oc75FGTH
number: 45141
html_url: https://github.com/huggingface/transformers/issues/45141
api_url: https://api.github.com/repos/huggingface/transformers/issues/45141
title: [refactor] gpt-oss `eager_attention_forward` for modularity (Ex: models with dual eager attn: sink/no sink)
body: ### Feature request Hi, I'm finalizing a PR for integrating MiMo-V2, but for my `MiMoV2FlashAttention` class I wanted to reuse, for modularity, both the classic (no sink) `eager_attention_forward` from LLama and the perma sink one from Arthur in gpt-oss. So that the full attention layers can benefit from SDPA/FLA2/f...
state: closed
state_reason: completed
locked: false
comments_count: 1
labels: ["Feature request"]
assignees: []
created_at: 2026-03-31T12:42:42Z
updated_at: 2026-04-02T13:27:33Z
closed_at: 2026-04-02T13:27:33Z
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: casinca
author_id: 47400729
author_node_id: MDQ6VXNlcjQ3NDAwNzI5
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4179801642
github_node_id: I_kwDOCUB6oc75IrYq
number: 45145
html_url: https://github.com/huggingface/transformers/issues/45145
api_url: https://api.github.com/repos/huggingface/transformers/issues/45145
title: [Energy] N6 Arithmetic: 50-70% AI Training/Inference Energy Reduction — 17 Techniques with Code
body: ## Summary **n=6 arithmetic reduces AI training and inference energy by 50-70%.** No hyperparameter search needed — all optimal values are mathematically predetermined from the unique solution to σ(n)·φ(n) = n·τ(n) ⟺ n = 6. **Full Guide**: [AI Energy Savings Guide](https://github.com/need-singularity/n6-architecture/...
state: closed
state_reason: completed
locked: false
comments_count: 0
labels: []
assignees: []
created_at: 2026-03-31T14:42:06Z
updated_at: 2026-04-02T11:09:35Z
closed_at: 2026-04-02T11:09:35Z
author_association: NONE
milestone_title: null
snapshot_id: 20260407T090028Z
extracted_at: 2026-04-07T09:00:28Z
author_login: dancinlife
author_id: 44921882
author_node_id: MDQ6VXNlcjQ0OTIxODgy
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4179820107
github_node_id: I_kwDOCUB6oc75Iv5L
number: 45146
html_url: https://github.com/huggingface/transformers/issues/45146
api_url: https://api.github.com/repos/huggingface/transformers/issues/45146
title: Allow for "pure" linear attention based Qwen3.5 models
body: ### Feature request This feature request proposes to allow for the creation of "pure" linear attention Qwen3.5 models, meaning that every layer should be allowed to be a Gated Deltanet token mixer. The following code should therefore be "allowed": ```py import torch from transformers.models import Qwen3_5ForCaus
closed
completed
false
4
[ "Feature request" ]
[]
2026-03-31T14:44:05Z
2026-04-02T11:17:06Z
2026-04-02T11:17:06Z
CONTRIBUTOR
null
20260407T090028Z
2026-04-07T09:00:28Z
HallerPatrick
22,773,355
MDQ6VXNlcjIyNzczMzU1
User
false
huggingface/transformers
4,181,308,817
I_kwDOCUB6oc75ObWR
45,151
https://github.com/huggingface/transformers/issues/45151
https://api.github.com/repos/huggingface/transformers/issues/45151
[Energy] N6 Arithmetic: 50-70% AI Training/Inference Energy Reduction — 17 Techniques with Code
🌍 **Open-source initiative to solve the global AI energy crisis.** AI infrastructure energy consumption is doubling every year. This research provides mathematically proven techniques to cut training and inference energy by 50-70%, with no proprietary tools needed. 🔓 All code, proofs, and documentation are fully op...
closed
completed
false
1
[]
[]
2026-03-31T18:10:50Z
2026-04-02T11:13:13Z
2026-04-02T11:09:09Z
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
dancinlife
44,921,882
MDQ6VXNlcjQ0OTIxODgy
User
false
huggingface/transformers
4,184,682,775
I_kwDOCUB6oc75bTEX
45,161
https://github.com/huggingface/transformers/issues/45161
https://api.github.com/repos/huggingface/transformers/issues/45161
Only TP not working with GPT-OSS MoE model
### System Info python version: 3.10 transformers version: v5.1-release branch torch version: 2.9.1+cu128 ### Who can help? While running my training script to finetune gpt-oss-20b model using Tensor Parallelism, the code is breaking at this line in [moe.py](https://github.com/huggingface/transformers/blob/3fa4da70f...
open
null
false
2
[ "bug" ]
[]
2026-04-01T07:07:22Z
2026-05-04T03:42:23Z
null
NONE
null
20260504T060033Z
2026-05-04T06:00:33Z
SharvariMedhe
178,161,239
U_kgDOCp6GVw
User
false
huggingface/transformers
4,184,987,992
I_kwDOCUB6oc75cdlY
45,162
https://github.com/huggingface/transformers/issues/45162
https://api.github.com/repos/huggingface/transformers/issues/45162
Document limitations of `PreTrainedModel._can_set_*` source inspection logic
Hi! While reading the implementation of PreTrainedModel._can_set_*, I noticed that it relies on inspecting the module source file via __file__ and performing string-based checks. This seems to work in typical cases, but there are a few scenarios where it may not behave as expected: - The searched strings may appear i...
closed
completed
false
3
[]
[]
2026-04-01T08:03:59Z
2026-05-12T08:54:31Z
2026-05-12T08:54:31Z
NONE
null
20260512T120027Z
2026-05-12T12:00:27Z
ElgoogUdiab
28,426,666
MDQ6VXNlcjI4NDI2NjY2
User
false
huggingface/transformers
4,190,662,664
I_kwDOCUB6oc75yHAI
45,175
https://github.com/huggingface/transformers/issues/45175
https://api.github.com/repos/huggingface/transformers/issues/45175
Feature request: Add EfficientViT-SAM (efficientvitsam) to Transformers
### Feature request [EfficientViT-SAM](https://github.com/mit-han-lab/efficientvit) combines MIT’s EfficientViT encoder with the SAM-style prompt encoder and mask decoder. It offers a lighter, faster alternative to ViT-based SAM for interactive segmentation while staying close to the same prompting and mask-decoding w...
open
null
false
0
[ "Feature request" ]
[]
2026-04-02T00:31:07Z
2026-04-02T00:31:07Z
null
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
masoudpz
4,822,765
MDQ6VXNlcjQ4MjI3NjU=
User
false
huggingface/transformers
4,191,057,386
I_kwDOCUB6oc75znXq
45,177
https://github.com/huggingface/transformers/issues/45177
https://api.github.com/repos/huggingface/transformers/issues/45177
add DeepSeek-OCR2
### Model Description [DeepSeek-OCR-2](https://huggingface.co/papers/2601.20552) is an OCR-specialized vision-language model proposed by the DeepSeek team. The model uses a distinctive architecture: - **Vision encoder**: SAM ViT-B - **Hybrid attention encoder**: Qwen2-based, applying bidirectional attention over i...
closed
completed
false
1
[ "New model" ]
[]
2026-04-02T02:43:49Z
2026-04-30T14:13:41Z
2026-04-30T14:13:41Z
CONTRIBUTOR
null
20260501T113108Z
2026-05-01T11:31:08Z
thisisiron
23,303,033
MDQ6VXNlcjIzMzAzMDMz
User
false
huggingface/transformers
4,192,752,849
I_kwDOCUB6oc756FTR
45,182
https://github.com/huggingface/transformers/issues/45182
https://api.github.com/repos/huggingface/transformers/issues/45182
🔒 Track: Pin GitHub Actions to commit SHAs
## Tracking issue — Pin GitHub Actions to commit SHAs This issue tracks the migration of all GitHub Actions workflow files to use pinned commit SHAs instead of mutable tags or branch names (e.g. \`v4\`, \`main\`). **Why?** Pinning to a SHA prevents supply chain attacks where a tag could be silently moved to point to ...
closed
completed
false
0
[ "actions-pin-sha" ]
[]
2026-04-02T08:19:53Z
2026-04-02T08:23:31Z
2026-04-02T08:23:31Z
MEMBER
null
20260407T090028Z
2026-04-07T09:00:28Z
paulinebm
155,966,238
U_kgDOCUvbHg
User
false
huggingface/transformers
4,193,166,790
I_kwDOCUB6oc757qXG
45,183
https://github.com/huggingface/transformers/issues/45183
https://api.github.com/repos/huggingface/transformers/issues/45183
[Bug] XOR logic for `input_ids`/`inputs_embeds` validation produces wrong or misleading error messages across multiple models
### System Info ## Summary Multiple models across the library use an XOR (`^`) condition to validate `input_ids` and `inputs_embeds` inputs. This pattern has two distinct issues depending on the model: 1. **Severe (11 models)**: XOR is paired with the error message `"You cannot specify both ..."`, which is fac...
closed
completed
false
0
[ "bug", "Code agent slop" ]
[]
2026-04-02T09:17:32Z
2026-04-02T11:56:39Z
2026-04-02T11:56:33Z
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
XanxusCrypto
30,135,657
MDQ6VXNlcjMwMTM1NjU3
User
false
huggingface/transformers
4,196,681,150
I_kwDOCUB6oc76JEW-
45,198
https://github.com/huggingface/transformers/issues/45198
https://api.github.com/repos/huggingface/transformers/issues/45198
[BUG] Wav2Vec2 wav2vec2-lv-60-espeak-cv-ft: save_pretrained and tokenization fail
### System Info * `transformers` version: `5.5.0.dev0` * Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39` * Python version: `3.12.3` * `huggingface_hub` version: `1.8.0` * `safetensors` version: `0.7.0` * `accelerate` version: `1.13.0` * Accelerate config: `not found` * DeepSpeed version: `no...
closed
completed
false
0
[ "bug" ]
[]
2026-04-02T19:58:10Z
2026-04-18T09:05:31Z
2026-04-14T13:59:04Z
CONTRIBUTOR
null
20260418T090534Z
2026-04-18T09:05:34Z
harshaljanjani
75,426,551
MDQ6VXNlcjc1NDI2NTUx
User
false
huggingface/transformers
4,196,814,649
I_kwDOCUB6oc76Jk85
45,200
https://github.com/huggingface/transformers/issues/45200
https://api.github.com/repos/huggingface/transformers/issues/45200
[Gemma 4] mm_token_type_ids required for text-only fine-tuning - should default to zeros
### System Info transformers: 5.5.0.dev0 (installed from source) torch: 2.8.0+cu128 trl: 1.0.0 peft: 0.18.2.dev0 Python: 3.12 OS: Linux (RunPod, Ubuntu 24.04) GPU: NVIDIA B200 (192GB) ### Who can help? @zucchini-nlp @ArthurZucker ### Information - [ ] The official example scripts - [x] My own modified scripts ...
closed
completed
false
5
[ "bug" ]
[]
2026-04-02T20:26:24Z
2026-04-22T10:44:25Z
2026-04-22T10:44:25Z
NONE
null
20260422T120052Z
2026-04-22T12:00:52Z
dentity007
184,003,273
U_kgDOCveqyQ
User
false
huggingface/transformers
4,197,078,693
I_kwDOCUB6oc76Klal
45,201
https://github.com/huggingface/transformers/issues/45201
https://api.github.com/repos/huggingface/transformers/issues/45201
[Gemma 4] Support per-layer FlashAttention: FA2 for sliding layers, SDPA for global layers
## Problem Gemma 4 (26B-A4B) has a **hybrid attention architecture** where different layers use different head dimensions: - **26 out of 30 layers** use **sliding window attention** with `head_dim=256` — fully compatible with FlashAttention 2 - **4 out of 30 layers** use **global attention** with `global_head_dim=512...
closed
completed
false
4
[]
[]
2026-04-02T21:30:14Z
2026-05-12T08:54:28Z
2026-05-12T08:54:28Z
NONE
null
20260512T120027Z
2026-05-12T12:00:27Z
samuelazran
2,499,928
MDQ6VXNlcjI0OTk5Mjg=
User
false
huggingface/transformers
4,197,807,024
I_kwDOCUB6oc76NXOw
45,203
https://github.com/huggingface/transformers/issues/45203
https://api.github.com/repos/huggingface/transformers/issues/45203
Add PolarQuant quantization: Hadamard-rotated Lloyd-Max optimal weights + KV cache
## 🚀 Feature request ### Motivation PolarQuant is a quantization method that uses **Walsh-Hadamard rotation + Lloyd-Max optimal centroids** for both weight compression and KV cache compression. It achieves better PPL per bit than existing methods because: 1. **Hadamard rotation** decorrelates weight/activation valu...
open
null
false
19
[]
[]
2026-04-03T01:52:14Z
2026-05-05T23:36:12Z
null
NONE
null
20260506T000041Z
2026-05-06T00:00:41Z
caiovicentino
193,428,813
U_kgDOC4d9TQ
User
false
huggingface/transformers
4,198,343,838
I_kwDOCUB6oc76PaSe
45,205
https://github.com/huggingface/transformers/issues/45205
https://api.github.com/repos/huggingface/transformers/issues/45205
Gemma4: chat_template missing from tokenizer_config.json, requires manual loading from separate file
## Description The Gemma4 models (e.g. `google/gemma-4-E2B-it`) don't include `chat_template` in `tokenizer_config.json`. The chat template is shipped as a separate `chat_template.jinja` file instead. This means the standard `tokenizer.apply_chat_template()` workflow fails out of the box: ```python from transformers...
open
null
false
7
[]
[]
2026-04-03T04:55:51Z
2026-05-07T15:42:25Z
null
CONTRIBUTOR
null
20260507T180032Z
2026-05-07T18:00:32Z
w4nderlust
349,256
MDQ6VXNlcjM0OTI1Ng==
User
false
huggingface/transformers
4,198,345,190
I_kwDOCUB6oc76Panm
45,206
https://github.com/huggingface/transformers/issues/45206
https://api.github.com/repos/huggingface/transformers/issues/45206
Gemma4: PLE (Per-Layer Embeddings) implementation is underdocumented and config is misleading
## Description I was implementing Gemma4 inference from scratch (in Rust) and the Per-Layer Embeddings (PLE) system was by far the hardest part to get right. The config fields are misleading, the embedding type is non-obvious, and the full pipeline involves several undocumented steps. Sharing this in case it helps oth...
closed
completed
false
0
[]
[]
2026-04-03T04:56:17Z
2026-04-14T17:01:13Z
2026-04-14T17:01:13Z
CONTRIBUTOR
null
20260414T200457Z
2026-04-14T20:04:57Z
w4nderlust
349,256
MDQ6VXNlcjM0OTI1Ng==
User
false
huggingface/transformers
4,198,441,837
I_kwDOCUB6oc76PyNt
45,208
https://github.com/huggingface/transformers/issues/45208
https://api.github.com/repos/huggingface/transformers/issues/45208
[Qwen3MoE] Potentially a bug on `Qwen3MoeSparseMoeBlock`
Hi, I found a typing mismatch on [`Qwen3MoeSparseMoeBlock`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen3_moe/modeling_qwen3_moe.py#L275): ```python class Qwen3MoeSparseMoeBlock(nn.Module): def __init__(self, config: Qwen3MoeConfig): super().__init__() self.ex...
closed
completed
false
2
[]
[]
2026-04-03T05:26:11Z
2026-04-19T13:41:02Z
2026-04-13T14:07:32Z
NONE
null
20260419T140535Z
2026-04-19T14:05:35Z
KbKuuhaku
30,406,457
MDQ6VXNlcjMwNDA2NDU3
User
false
huggingface/transformers
4,199,586,186
I_kwDOCUB6oc76UJmK
45,216
https://github.com/huggingface/transformers/issues/45216
https://api.github.com/repos/huggingface/transformers/issues/45216
[Regression] Qwen3.5 saved checkpoint is not correct with `save_pretrained` API since version 5.4.0
### System Info transformers == 5.3.0 works well transformers ==5.4.0 returns `Unexpected model.language_model.language_model.language_model.layers.7.self_attn.v_proj.weight in loaded safetensors file` ### Who can help? @zucchini-nlp ### Information - [ ] The official example scripts - [x] My own modified scripts ...
closed
completed
false
1
[ "bug" ]
[]
2026-04-03T09:42:19Z
2026-04-09T13:17:51Z
2026-04-09T13:17:51Z
CONTRIBUTOR
null
20260411T144729Z
2026-04-11T14:47:29Z
xin3he
83,260,933
MDQ6VXNlcjgzMjYwOTMz
User
false
huggingface/transformers
4,202,205,162
I_kwDOCUB6oc76eI_q
45,229
https://github.com/huggingface/transformers/issues/45229
https://api.github.com/repos/huggingface/transformers/issues/45229
Gemma4 31B-IT Multi-GPU inference CUDA OOM
### System Info ### Description I updated my transformers module to 5.5.0 from 4.53.0 to try `google/gemma-4-31B-it` model. I was using `meta-llama/Llama-3.3-70B-Instruct` for the same set of prompts. The Llama model is able to process the prompt without any problems despite occupying more VRAM than Gemma4. Gemma4 on ...
closed
completed
false
21
[ "bug" ]
[]
2026-04-03T21:07:41Z
2026-04-23T08:18:03Z
2026-04-23T08:18:03Z
NONE
null
20260423T120024Z
2026-04-23T12:00:24Z
vaibhavBh-0
94,278,954
U_kgDOBZ6VKg
User
false
huggingface/transformers
4,202,212,283
I_kwDOCUB6oc76eKu7
45,230
https://github.com/huggingface/transformers/issues/45230
https://api.github.com/repos/huggingface/transformers/issues/45230
Bug report
### System Info Ch ### Who can help? _No response_ ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] #45231 ### Reproduction Vvvbbbbbx ### Expected behavior <img width="495" he...
closed
completed
false
1
[ "bug" ]
[]
2026-04-03T21:10:02Z
2026-04-03T21:13:54Z
2026-04-03T21:13:35Z
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
kerrrang9214-tech
242,015,866
U_kgDODmzeeg
User
false
huggingface/transformers
4,202,223,249
I_kwDOCUB6oc76eNaR
45,231
https://github.com/huggingface/transformers/issues/45231
https://api.github.com/repos/huggingface/transformers/issues/45231
My own task or dataset (give details below)
null
closed
completed
false
0
[]
[]
2026-04-03T21:13:54Z
2026-04-08T13:07:14Z
2026-04-08T13:07:14Z
NONE
null
20260411T144729Z
2026-04-11T14:47:29Z
kerrrang9214-tech
242,015,866
U_kgDODmzeeg
User
false
huggingface/transformers
4,203,993,670
I_kwDOCUB6oc76k9pG
45,237
https://github.com/huggingface/transformers/issues/45237
https://api.github.com/repos/huggingface/transformers/issues/45237
GPT-OSS-20B not work in AMD GPUs
### System Info GPT-OSS-20B does not work on Radeon GPUs. I tested it in both the native environment and the Docker container rocm/pytorch:rocm7.2.1_ubuntu24.04_py3.12_pytorch_release_2.9.1. I tried updating Triton, but it still didn't work. I tried those versions of Triton, triton-rocm 3.6.0, 3.5.1+rocm (included in ...
open
null
false
4
[ "bug" ]
[]
2026-04-04T07:20:50Z
2026-05-01T00:51:47Z
null
CONTRIBUTOR
null
20260501T113108Z
2026-05-01T11:31:08Z
tanreinama
51,933,889
MDQ6VXNlcjUxOTMzODg5
User
false
huggingface/transformers
4,204,346,376
I_kwDOCUB6oc76mTwI
45,239
https://github.com/huggingface/transformers/issues/45239
https://api.github.com/repos/huggingface/transformers/issues/45239
🚨 QA Observer Agent: Real-Time Architecture & Security Pattern Watcher (SCAFFOLD-WATCH)
### Feature request Proposing SCAFFOLD-WATCH — an observer agent to proactively surface architectural drift, security vulnerabilities (e.g. credential leaks, unparameterized SQL, agent drift) and redundant/repetitive developer work in real-time across PRs and developer sessions. Systems like Transformers are highly ...
closed
completed
false
1
[]
[]
2026-04-04T09:14:13Z
2026-04-08T13:12:28Z
2026-04-08T13:12:28Z
NONE
null
20260411T144729Z
2026-04-11T14:47:29Z
Insider77Circle
160,362,522
U_kgDOCY7wGg
User
false
huggingface/transformers
4,205,336,949
I_kwDOCUB6oc76qFl1
45,242
https://github.com/huggingface/transformers/issues/45242
https://api.github.com/repos/huggingface/transformers/issues/45242
[Gemma 4] `use_cache=False` corrupts attention computation, producing garbage logits
Gemma 4 has a bug where `use_cache=False` corrupts the attention computation, producing garbage logits. Every QLoRA tutorial sets `model.config.use_cache = False`, but this breaks Gemma 4 specifically. When fine-tuning Gemma 4 (E2B-it in this situation) using standard QLoRA/LoRA workflows, the model produces garbage l...
closed
completed
false
6
[]
[]
2026-04-04T16:48:25Z
2026-04-10T12:07:11Z
2026-04-09T08:17:52Z
NONE
null
20260411T144729Z
2026-04-11T14:47:29Z
siwoolol
99,875,626
U_kgDOBfP7Kg
User
false
huggingface/transformers
4,206,107,487
I_kwDOCUB6oc76tBtf
45,245
https://github.com/huggingface/transformers/issues/45245
https://api.github.com/repos/huggingface/transformers/issues/45245
RuntimeError: number of categories cannot exceed 2^24
### System Info - `transformers` version: 5.2.0 - Platform: Linux (Ubuntu) / CUDA ### Information - [x] The official example scripts - [x] My own modified scripts ### Bug description When using `model.generate()` with `do_sample=True` and a large `num_beams` (e.g., 128) on a model with a large vocab...
closed
completed
false
3
[ "bug" ]
[]
2026-04-05T00:42:05Z
2026-05-08T11:25:13Z
2026-05-08T11:25:13Z
NONE
null
20260508T120022Z
2026-05-08T12:00:22Z
Hzzone
19,267,349
MDQ6VXNlcjE5MjY3MzQ5
User
false
huggingface/transformers
4,206,312,958
I_kwDOCUB6oc76tz3-
45,246
https://github.com/huggingface/transformers/issues/45246
https://api.github.com/repos/huggingface/transformers/issues/45246
[Security/Feature] Deterministic substrate made modeling_utils.py stateful without modifying source — CJPI 100 · CVE-2025-32434 wrapped
Hi HuggingFace team, I’m a solo developer. I built a deterministic, non-AI software evolution engine called CMPSBL® and I ran it on modeling_utils.py — your foundational training model utility layer. I want to be direct about what happened: The substrate wrapped the file in 217 seconds. It did not modify a single li...
closed
completed
false
15
[]
[]
2026-04-05T03:06:11Z
2026-04-11T16:40:17Z
2026-04-08T13:14:48Z
NONE
null
20260413T085906Z
2026-04-13T08:59:06Z
SweetKenneth
125,772,314
U_kgDOB38iGg
User
false
huggingface/transformers
4,207,250,831
I_kwDOCUB6oc76xY2P
45,250
https://github.com/huggingface/transformers/issues/45250
https://api.github.com/repos/huggingface/transformers/issues/45250
Flash Attention 2.0
NemotronHForCausalLM does not support Flash Attention 2.0 yet. Please request to add support where the model is hosted, on its model hub page: https://huggingface.co//kaggle/input/models/metric/nemotron-3-nano-30b-a3b-bf16/transformers/default/1/discussions/new or in the Transformers GitHub repo: ...
closed
completed
false
1
[]
[]
2026-04-05T11:04:24Z
2026-04-07T10:43:33Z
2026-04-07T10:43:33Z
NONE
null
20260411T144729Z
2026-04-11T14:47:29Z
nicholasdudek
196,255,629
U_kgDOC7KfjQ
User
false
huggingface/transformers
4,209,709,972
I_kwDOCUB6oc766xOU
45,259
https://github.com/huggingface/transformers/issues/45259
https://api.github.com/repos/huggingface/transformers/issues/45259
How can I use Gemma4's variable image resolution feature?
Hi, I'd like to adjust the vision token budget. It appears that `default_output_length` in vision_config controls the length of the vision output tokens, but modifying it in `config.json` or loading the processor with `default_output_length=560` don't work. Could you advise how to change the number of vision outputs?
closed
completed
false
1
[]
[]
2026-04-06T05:14:28Z
2026-04-14T10:52:32Z
2026-04-14T10:52:32Z
NONE
null
20260414T122001Z
2026-04-14T12:20:01Z
ejlee95
36,064,879
MDQ6VXNlcjM2MDY0ODc5
User
false
huggingface/transformers
4,212,778,412
I_kwDOCUB6oc77GeWs
45,265
https://github.com/huggingface/transformers/issues/45265
https://api.github.com/repos/huggingface/transformers/issues/45265
More permissive config parsing and validation
### Feature request Make more permissive `config.json`/`params.json` parsing / validation: cast int constants as float without warnings ### Motivation E.g. when loading Leanstral (cf https://huggingface.co/mistralai/Leanstral-2603/discussions/7#69cfde05abe040f5323c6390): ``` Unrecognized keys in `rope_parameters` f...
closed
completed
false
9
[ "Feature request" ]
[]
2026-04-06T16:28:50Z
2026-04-13T13:14:41Z
2026-04-13T13:14:41Z
NONE
null
20260414T122001Z
2026-04-14T12:20:01Z
vadimkantorov
1,041,752
MDQ6VXNlcjEwNDE3NTI=
User
false
huggingface/transformers
4,215,133,968
I_kwDOCUB6oc77PdcQ
45,276
https://github.com/huggingface/transformers/issues/45276
https://api.github.com/repos/huggingface/transformers/issues/45276
[gemma4] resize_token_embeddings does not effect to embed_tokens_per_layer or output_embeddings
### System Info - `transformers` version: 5.5.0 - Platform: Linux-6.6.113+-x86_64-with-glibc2.35 - Python version: 3.12.13 - Huggingface_hub version: 1.8.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2....
closed
completed
false
8
[ "bug" ]
[]
2026-04-07T02:55:03Z
2026-04-15T11:15:23Z
2026-04-15T11:15:23Z
CONTRIBUTOR
null
20260415T224019Z
2026-04-15T22:40:19Z
KoichiYasuoka
15,098,598
MDQ6VXNlcjE1MDk4NTk4
User
false
huggingface/transformers
4,215,644,353
I_kwDOCUB6oc77RaDB
45,278
https://github.com/huggingface/transformers/issues/45278
https://api.github.com/repos/huggingface/transformers/issues/45278
Many import errors after update from 4.57.0 to 5.5.0
### System Info After updating Transformers from version 4.57.0 to version 5.5.0, I get the following import errors: ImportError: cannot import name 'HybridCache' from 'transformers' ImportError: cannot import name 'AutoModelForVision2Seq' from 'transformers' ImportError: cannot import name 'PretrainedConfig' from 't...
closed
completed
false
3
[ "bug" ]
[]
2026-04-07T05:28:47Z
2026-05-15T08:58:19Z
2026-05-15T08:58:19Z
NONE
null
20260515T120027Z
2026-05-15T12:00:27Z
marchcat69
67,062,782
MDQ6VXNlcjY3MDYyNzgy
User
false
huggingface/transformers
4,217,995,415
I_kwDOCUB6oc77aYCX
45,290
https://github.com/huggingface/transformers/issues/45290
https://api.github.com/repos/huggingface/transformers/issues/45290
`apply_chat_template(tokenize=True)` crashes on assistant messages with tool calls and no content
### System Info Transformers 5.5.0 ### Who can help? @zucchini-nlp @Rocketknight1 ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ##...
closed
completed
false
2
[ "bug" ]
[]
2026-04-07T13:29:49Z
2026-04-13T19:02:32Z
2026-04-13T19:02:32Z
MEMBER
null
20260414T122001Z
2026-04-14T12:20:01Z
qgallouedec
45,557,362
MDQ6VXNlcjQ1NTU3MzYy
User
false
huggingface/transformers
4,218,427,722
I_kwDOCUB6oc77cBlK
45,292
https://github.com/huggingface/transformers/issues/45292
https://api.github.com/repos/huggingface/transformers/issues/45292
resize_token_embeddings does not effect to output_embeddings
### System Info - `transformers` version: 5.5.0 - Platform: Linux-6.6.113+-x86_64-with-glibc2.35 - Python version: 3.12.13 - Huggingface_hub version: 1.8.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.10...
closed
completed
false
3
[ "bug" ]
[]
2026-04-07T14:36:45Z
2026-04-25T09:16:04Z
2026-04-25T09:16:04Z
CONTRIBUTOR
null
20260425T120019Z
2026-04-25T12:00:19Z
KoichiYasuoka
15,098,598
MDQ6VXNlcjE1MDk4NTk4
User
false
huggingface/transformers
4,219,591,188
I_kwDOCUB6oc77gdoU
45,295
https://github.com/huggingface/transformers/issues/45295
https://api.github.com/repos/huggingface/transformers/issues/45295
Support Sequence Classification for Gemma 4 Models
### Feature request Add Gemma4ForSequenceClassification ### Motivation Without this class, fine-tuning Gemma 4 on classification tasks requires manually adding a classification head, losing compatibility with AutoModelForSequenceClassification, Trainer, and the standard pipeline workflow. ### Your contribution #45...
closed
completed
false
2
[ "Feature request" ]
[]
2026-04-07T17:52:26Z
2026-04-15T10:43:34Z
2026-04-15T10:43:34Z
NONE
null
20260415T224019Z
2026-04-15T22:40:19Z
jesperschlegel
17,431,950
MDQ6VXNlcjE3NDMxOTUw
User
false
huggingface/transformers
4,221,012,848
I_kwDOCUB6oc77l4tw
45,304
https://github.com/huggingface/transformers/issues/45304
https://api.github.com/repos/huggingface/transformers/issues/45304
NexusQuant: training-free KV cache compression (10-33x) via DynamicCache hooks
Hi, Sharing a training-free KV cache compression approach we've been developing that hooks into DynamicCache. Might be useful for folks running into memory limits with long contexts. **NexusQuant** compresses the KV cache by 10-33x by combining attention-based token eviction with E8 lattice vector quantization. It mo...
open
null
false
2
[]
[]
2026-04-07T22:50:31Z
2026-05-08T08:27:55Z
null
NONE
null
20260508T120022Z
2026-05-08T12:00:22Z
jagmarques
32,335,502
MDQ6VXNlcjMyMzM1NTAy
User
false
huggingface/transformers
4,221,154,706
I_kwDOCUB6oc77mbWS
45,305
https://github.com/huggingface/transformers/issues/45305
https://api.github.com/repos/huggingface/transformers/issues/45305
Gradients not averaged by GAS when using DeepSpeed + model_accepts_loss_kwargs=True (Qwen3, Llama3, etc.)
### System Info - `transformers` version: 5.3.0 - Platform: Linux-5.14.0-427.33.1.el9_4.x86_64-x86_64-with-glibc2.34 - Python version: 3.11.15 - Huggingface_hub version: 1.6.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_ty...
closed
completed
false
6
[ "bug" ]
[]
2026-04-07T23:31:43Z
2026-04-13T14:32:25Z
2026-04-13T14:32:25Z
CONTRIBUTOR
null
20260414T122001Z
2026-04-14T12:20:01Z
florian6973
70,778,912
MDQ6VXNlcjcwNzc4OTEy
User
false
huggingface/transformers
4,221,321,450
I_kwDOCUB6oc77nEDq
45,306
https://github.com/huggingface/transformers/issues/45306
https://api.github.com/repos/huggingface/transformers/issues/45306
Remove import * usage
Sorry but I want to revisit a closed issue #41669 here. That issue was automatically closed at the time without further responses. I hope we can reconsider the validity of that issue. As I mentioned in that issue, I believe this is a serious enough problem. Actually, we're on the same page: we all want IDE autocomplet...
open
null
false
2
[]
[]
2026-04-08T00:22:53Z
2026-05-08T08:27:53Z
null
NONE
null
20260508T120022Z
2026-05-08T12:00:22Z
yueyinqiu
18,749,772
MDQ6VXNlcjE4NzQ5Nzcy
User
false
huggingface/transformers
4,222,028,568
I_kwDOCUB6oc77pwsY
45,307
https://github.com/huggingface/transformers/issues/45307
https://api.github.com/repos/huggingface/transformers/issues/45307
Bug, Generation
## Title `AssistantToTargetTranslator` crashes with `AttributeError: 'map_input_embeddings'` when using assisted generation with cross-vocab models ## Description When using assisted generation (`model.generate(assistant_model=...)`) with models that have different vocabulary sizes but share the same tokenizer famil...
closed
completed
false
3
[]
[]
2026-04-08T04:07:37Z
2026-04-10T09:30:25Z
2026-04-10T09:30:25Z
CONTRIBUTOR
null
20260411T144729Z
2026-04-11T14:47:29Z
Regata3010
105,749,532
U_kgDOBk2cHA
User
false
huggingface/transformers
4,222,532,106
I_kwDOCUB6oc77rroK
45,308
https://github.com/huggingface/transformers/issues/45308
https://api.github.com/repos/huggingface/transformers/issues/45308
Feature request: Support evaluation every N epochs in TrainingArguments
### Feature request Currently, Trainer supports evaluation strategies: - "epoch": evaluate every epoch - "steps": evaluate every N steps However, there is no built-in way to evaluate every N epochs (e.g., every 5 epochs). This is particularly useful when: - evaluation is computationally expensive - running large-sca...
open
null
false
3
[ "Feature request" ]
[]
2026-04-08T06:22:20Z
2026-05-01T16:29:18Z
null
NONE
null
20260501T180051Z
2026-05-01T18:00:51Z
varuna-km-18267
201,051,200
U_kgDOC_vMQA
User
false
huggingface/transformers
4,223,503,781
I_kwDOCUB6oc77vY2l
45,310
https://github.com/huggingface/transformers/issues/45310
https://api.github.com/repos/huggingface/transformers/issues/45310
[BUG] transformers>=5.4.0, Qwen3.5 Moe from_pretrained error
### System Info ``` import os os.environ['CUDA_VISIBLE_DEVICS'] = '0' from transformers import Qwen3_5ForConditionalGeneration, AutoTokenizer model = Qwen3_5ForConditionalGeneration.from_pretrained('Qwen/Qwen3.5-35B-A3B') model.save_pretrained('/root/Qwen3.5-35B-A3B', max_shard_size='10GB') model = Qwen3_5ForCond...
closed
completed
false
2
[ "bug" ]
[]
2026-04-08T09:29:48Z
2026-04-09T13:26:04Z
2026-04-09T13:26:04Z
CONTRIBUTOR
null
20260411T144729Z
2026-04-11T14:47:29Z
Jintao-Huang
45,290,347
MDQ6VXNlcjQ1MjkwMzQ3
User
false
huggingface/transformers
4,224,244,253
I_kwDOCUB6oc77yNod
45,313
https://github.com/huggingface/transformers/issues/45313
https://api.github.com/repos/huggingface/transformers/issues/45313
Qwen3.5: DeepSpeed ZeRO-3 fails to load weights for language_model
## System Info - `transformers` version: 5.4.0 - Platform: Linux (H200 x4) - Python version: 3.12.0 - DeepSpeed version: 0.18.5 - PyTorch version: 2.8.0+cu128 (CUDA) ## Problem When loading Qwen/Qwen3.5-27B (also tested with 9B) with DeepSpeed ZeRO-3, `language_model` parameters are reported as MISSING in the load rep...
open
null
false
9
[ "Should Fix" ]
[]
2026-04-08T11:44:10Z
2026-05-13T02:44:22Z
null
NONE
null
20260513T060025Z
2026-05-13T06:00:25Z
debOliveira
48,807,586
MDQ6VXNlcjQ4ODA3NTg2
User
false
huggingface/transformers
4,225,839,225
I_kwDOCUB6oc774TB5
45,322
https://github.com/huggingface/transformers/issues/45322
https://api.github.com/repos/huggingface/transformers/issues/45322
[Model Request] Add EUPE (Efficient Universal Perception Encoder) by Meta AI
### Model description I would like to request adding support for EUPE (Efficient Universal Perception Encoder), a recent vision backbone released by Meta AI. Paper: https://arxiv.org/abs/2603.22387 Code: https://github.com/facebookresearch/eupe EUPE is a multi-purpose vision encoder trained via distillation from mul...
open
null
false
2
[ "New model" ]
[]
2026-04-08T15:52:40Z
2026-04-28T02:29:11Z
null
NONE
null
20260428T060015Z
2026-04-28T06:00:15Z
forensicmike
44,839,768
MDQ6VXNlcjQ0ODM5NzY4
User
false
huggingface/transformers
4,226,684,674
I_kwDOCUB6oc777hcC
45,325
https://github.com/huggingface/transformers/issues/45325
https://api.github.com/repos/huggingface/transformers/issues/45325
Qwen2.5-VL get_rope_index scales still-image temporal position_ids by tokens_per_second in transformers 5.3.0
### System Info - `transformers` version: 5.3.0 - Platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - Huggingface_hub version: 1.9.0 - Safetensors version: 0.6.2 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerato...
closed
completed
false
1
[ "bug" ]
[]
2026-04-08T18:20:48Z
2026-04-10T09:17:44Z
2026-04-10T09:17:44Z
NONE
null
20260411T144729Z
2026-04-11T14:47:29Z
ayaan-fw
197,808,338
U_kgDOC8pQ0g
User
false
huggingface/transformers
4,229,234,189
I_kwDOCUB6oc78FP4N
45,331
https://github.com/huggingface/transformers/issues/45331
https://api.github.com/repos/huggingface/transformers/issues/45331
[Gemma4] Bug: audio token missing newline separators in chat_template.jinja causes multimodal failure when image precedes audio
## Bug Description In `chat_template.jinja` for Gemma4, the image token has `\n\n` separators but the audio token does not: ```jinja {%- elif item['type'] == 'image' -%} {{- '\n\n<|image|>\n\n' -}} ← has \n\n {%- elif item['type'] == 'audio' -%} {{- '<|audio|>' -}} ← missing \n\n This causes the ...
closed
completed
false
2
[]
[]
2026-04-09T03:35:46Z
2026-05-11T14:32:04Z
2026-05-11T14:32:04Z
NONE
null
20260511T180035Z
2026-05-11T18:00:35Z
LfWhat
103,490,056
U_kgDOBisiCA
User
false
huggingface/transformers
4,230,606,726
I_kwDOCUB6oc78Ke-G
45,335
https://github.com/huggingface/transformers/issues/45335
https://api.github.com/repos/huggingface/transformers/issues/45335
[t5gemma] resize_token_embeddings does not effect to decoder.embed_tokens
### System Info - `transformers` version: 5.5.1 - Platform: Linux-6.6.113+-x86_64-with-glibc2.35 - Python version: 3.12.13 - Huggingface_hub version: 1.9.2 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.10...
closed
completed
false
4
[ "bug" ]
[]
2026-04-09T08:11:47Z
2026-04-16T16:02:08Z
2026-04-16T16:02:08Z
CONTRIBUTOR
null
20260416T222012Z
2026-04-16T22:20:12Z
KoichiYasuoka
15,098,598
MDQ6VXNlcjE1MDk4NTk4
User
false
huggingface/transformers
4,232,449,575
I_kwDOCUB6oc78Rg4n
45,341
https://github.com/huggingface/transformers/issues/45341
https://api.github.com/repos/huggingface/transformers/issues/45341
A little bug in testing_utils.py
### System Info . ### Who can help? I was trying to run integration tests on cpu for a new model addition PR then got hit by an error due to line 3220 https://github.com/huggingface/transformers/blob/2fae57f5da9b0108d0c4da2692f5a702e6fb8c02/src/transformers/testing_utils.py#L3215-L3235 The reason is that I have CU...
closed
completed
false
1
[ "bug" ]
[]
2026-04-09T13:05:35Z
2026-05-11T03:13:27Z
2026-05-11T03:13:27Z
CONTRIBUTOR
null
20260511T060028Z
2026-05-11T06:00:28Z
MHRDYN7
113,298,714
U_kgDOBsDNGg
User
false
huggingface/transformers
4,238,183,287
I_kwDOCUB6oc78nYt3
45,356
https://github.com/huggingface/transformers/issues/45356
https://api.github.com/repos/huggingface/transformers/issues/45356
Regression in Kimi-K2.5 tokenizer from 5.3.0 to 5.4.0: incorrect codec handling and misleading fix_mistral_regex warning
### System Info - OS: Linux - Python: 3.10.12 - Model/tokenizer: `moonshotai/Kimi-K2.5` - `trust_remote_code=True` ### Who can help? @ArthurZucker and @itazap ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (s...
closed
completed
false
3
[ "bug" ]
[]
2026-04-10T09:42:26Z
2026-04-13T16:43:05Z
2026-04-13T15:16:25Z
NONE
null
20260414T122001Z
2026-04-14T12:20:01Z
Lander-Hatsune
36,358,465
MDQ6VXNlcjM2MzU4NDY1
User
false
huggingface/transformers
4,238,326,785
I_kwDOCUB6oc78n7wB
45,357
https://github.com/huggingface/transformers/issues/45357
https://api.github.com/repos/huggingface/transformers/issues/45357
[Regression] Qwen3.5 `save_pretrained` still saves incorrect visual encoder keys in 5.5.3
### System Info - `transformers` version: 5.5.0, 5.5.3 - Platform: Linux (NVIDIA A100 80GB × 8) - Python version: 3.12 - PyTorch version: 2.9.1+cu128 - CUDA version: 12.8 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially...
closed
completed
false
3
[ "bug" ]
[]
2026-04-10T10:01:58Z
2026-05-12T07:45:12Z
2026-04-10T15:41:46Z
NONE
null
20260512T120027Z
2026-05-12T12:00:27Z
johnking0099
1,121,447
MDQ6VXNlcjExMjE0NDc=
User
false
huggingface/transformers
4,240,172,193
I_kwDOCUB6oc78u-Sh
45,362
https://github.com/huggingface/transformers/issues/45362
https://api.github.com/repos/huggingface/transformers/issues/45362
Qwen3.5-35B crashes with transformers chat
### System Info Using "transformers chat" with Qwen3.5-35B the moment a prompt is sent the server errors with an AttributeError ``` [snip] File "/mnt/venvs/rocm6.4/lib64/python3.12/site-packages/transformers/cli/serving/chat_completion.py", line 174, in _streaming queue, streamer = gen_manager.generate_streaming(...
closed
completed
false
1
[ "bug" ]
[]
2026-04-10T15:39:40Z
2026-04-13T15:01:54Z
2026-04-13T15:01:54Z
NONE
null
20260414T122001Z
2026-04-14T12:20:01Z
Sector14
7,682,567
MDQ6VXNlcjc2ODI1Njc=
User
false
huggingface/transformers
4,243,389,991
I_kwDOCUB6oc787P4n
45,372
https://github.com/huggingface/transformers/issues/45372
https://api.github.com/repos/huggingface/transformers/issues/45372
ImportError: cannot import name 'ReasoningEffort' from mistral_common breaks Gemma 4 processor loading in transformers 5.5.x / 5.6.0.dev0
### System Info Environment - transformers: 5.5.1 and 5.6.0.dev0 (git main as of ~2026-04-08) - mistral-common: 1.9.1 - Python: 3.11.15 - Platform: macOS aarch64 (Apple Silicon) - mlx-vlm: 0.4.x Bug description When loading any model via AutoProcessor (tested with google/gemma-4-31b-it and mlx-community/Qwen3.5-2B-4b...
open
null
false
4
[ "bug" ]
[]
2026-04-11T06:34:13Z
2026-05-15T10:31:54Z
null
NONE
null
20260515T120027Z
2026-05-15T12:00:27Z
Glademist
40,288,837
MDQ6VXNlcjQwMjg4ODM3
User
false
huggingface/transformers
4,243,403,001
I_kwDOCUB6oc787TD5
45,373
https://github.com/huggingface/transformers/issues/45373
https://api.github.com/repos/huggingface/transformers/issues/45373
Add Gemma4ForSequenceClassification (missing from gemma4 module — Gemma 2/3 have it)
### Feature request ...
open
null
false
6
[ "Feature request" ]
[]
2026-04-11T06:41:23Z
2026-05-06T14:29:17Z
null
NONE
null
20260506T180045Z
2026-05-06T18:00:45Z
LarsKlawitter
46,841,030
MDQ6VXNlcjQ2ODQxMDMw
User
false
huggingface/transformers
4,244,561,953
I_kwDOCUB6oc78_uAh
45,375
https://github.com/huggingface/transformers/issues/45375
https://api.github.com/repos/huggingface/transformers/issues/45375
Qwen3_5MoeVisionConfig missing deepstack_visual_indexes field — silently dropped by @strict
### System Info Transformers v5.5.0 The @strict decorator on Qwen3_5MoeVisionConfig (added in #41250) silently drops the deepstack_visual_indexes field during config loading, because it's not declared as a class attribute. However, every Qwen3.5 MoE model on HuggingFace ships with this field in its config.json. ...
open
null
false
2
[ "bug" ]
[]
2026-04-11T12:43:25Z
2026-05-12T08:54:20Z
null
NONE
null
20260512T120027Z
2026-05-12T12:00:27Z
sharonyu-115
192,668,651
U_kgDOC3vj6w
User
false
huggingface/transformers
4,244,594,758
I_kwDOCUB6oc78_2BG
45,376
https://github.com/huggingface/transformers/issues/45376
https://api.github.com/repos/huggingface/transformers/issues/45376
[Bug] Loading gemma4 on transformers v4 raises misleading AttributeError: 'list' object has no attribute 'keys' instead of a clear version requirement error
## System info ``` - transformers version: 4.x (any release before v5.5.0) - Platform: macOS (Apple Silicon) / Linux - Python version: 3.12 / 3.13 - Model: google/gemma-4-E4B-it (requires transformers >= 5.5.0) ``` ## Background `google/gemma-4-E4B-it` uses the `gemma4` architecture, which was introduced in transfor...
open
null
false
5
[ "bug" ]
[]
2026-04-11T12:57:06Z
2026-05-12T08:54:19Z
null
NONE
null
20260512T120027Z
2026-05-12T12:00:27Z
HARISH-CS-01
53,273,291
MDQ6VXNlcjUzMjczMjkx
User
false
huggingface/transformers
4,245,939,857
I_kwDOCUB6oc79E-aR
45,381
https://github.com/huggingface/transformers/issues/45381
https://api.github.com/repos/huggingface/transformers/issues/45381
transformers==5.3.0, qwen2.5-vl video input vision_position_ids seems to be wrong
### System Info transformers == 5.3.0 [But the bug seems to be with any transformers >= 5.3.0, even in the current main branch] qwen_vl_utils == 0.0.14 Python 3.12.4 Cuda 12.6 ### Who can help? @zucchini-nlp ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An offi...
closed
completed
false
7
[ "bug" ]
[]
2026-04-11T22:43:30Z
2026-04-14T13:02:14Z
2026-04-14T13:02:14Z
NONE
null
20260414T151202Z
2026-04-14T15:12:02Z
bicheng-xu
20,759,548
MDQ6VXNlcjIwNzU5NTQ4
User
false
huggingface/transformers
4,251,348,974
I_kwDOCUB6oc79Zm_u
45,390
https://github.com/huggingface/transformers/issues/45390
https://api.github.com/repos/huggingface/transformers/issues/45390
CLIPTextModel / CLIPVisionModel fail to load old checkpoints after architecture flattening
## Description After the recent refactoring that flattened `CLIPTextModel` (removed the `self.text_model` wrapper) and `CLIPVisionModel` (removed the `self.vision_model` wrapper), old checkpoints that were saved with the nested structure can no longer be loaded correctly. All weights end up randomly initialized becau...
open
reopened
false
2
[]
[]
2026-04-13T04:50:21Z
2026-05-13T08:56:17Z
null
MEMBER
null
20260513T120045Z
2026-05-13T12:00:45Z
sayakpaul
22,957,388
MDQ6VXNlcjIyOTU3Mzg4
User
false
huggingface/transformers
4,253,119,364
I_kwDOCUB6oc79gXOE
45,397
https://github.com/huggingface/transformers/issues/45397
https://api.github.com/repos/huggingface/transformers/issues/45397
[BUG] gemma-4 zero3 from_pretrained
### System Info <img width="946" height="361" alt="Image" src="https://github.com/user-attachments/assets/193f1646-b90a-4d8b-a4ff-db8b252133ef" /> https://github.com/modelscope/ms-swift/issues/9078 google/gemma-4-E4B-it zero2 works fine, zero3 does not. zero2: <img width="876" height="108" alt="Image" src="htt...
closed
completed
false
4
[ "bug" ]
[]
2026-04-13T09:25:04Z
2026-04-17T13:59:12Z
2026-04-17T13:59:12Z
CONTRIBUTOR
null
20260417T180542Z
2026-04-17T18:05:42Z
Jintao-Huang
45,290,347
MDQ6VXNlcjQ1MjkwMzQ3
User
false
huggingface/transformers
4,253,559,298
I_kwDOCUB6oc79iCoC
45,399
https://github.com/huggingface/transformers/issues/45399
https://api.github.com/repos/huggingface/transformers/issues/45399
Fallback to kernels-community/flash-attn2 is blocked by other checks when fa2 is not installed
### System Info - `transformers` version: 5.5.3 - Platform: Linux-5.4.0-216-generic-x86_64-with-glibc2.31 - Python version: 3.10.0 - Huggingface_hub version: 1.10.1 - Safetensors version: 0.5.2 - Accelerate version: 1.6.0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: MULTI_GP...
closed
completed
false
1
[ "bug" ]
[]
2026-04-13T10:20:19Z
2026-04-14T02:16:52Z
2026-04-14T02:16:52Z
CONTRIBUTOR
null
20260414T122001Z
2026-04-14T12:20:01Z
efsotr
104,755,879
U_kgDOBj5ypw
User
false
huggingface/transformers
4,255,127,163
I_kwDOCUB6oc79oBZ7
45,405
https://github.com/huggingface/transformers/issues/45405
https://api.github.com/repos/huggingface/transformers/issues/45405
`MIN_PEFT_VERSION` bumped to 0.18.2 which is not yet released on PyPI
### System Info - `transformers` version: 5.6.0.dev0 (post-commit c585eeaa65) - Platform: Linux-5.14.0-570.12.1.el9_6.x86_64-x86_64-with-glibc2.34 - Python version: 3.11.15 - Huggingface_hub version: 1.9.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: n...
closed
completed
false
3
[ "bug" ]
[]
2026-04-13T13:31:01Z
2026-04-13T13:59:24Z
2026-04-13T13:59:24Z
NONE
null
20260414T122001Z
2026-04-14T12:20:01Z
artem-spector
10,895,869
MDQ6VXNlcjEwODk1ODY5
User
false
huggingface/transformers
4,255,361,302
I_kwDOCUB6oc79o6kW
45,406
https://github.com/huggingface/transformers/issues/45406
https://api.github.com/repos/huggingface/transformers/issues/45406
transformers serve crashes with AttributeError: 'Gemma4Processor' object has no attribute '_tokenizer'
### System Info transformers version: 5.5.3 Python version: 3.12.3 PyTorch version: 2.11.0+cu130 Platform: Linux (Ubuntu 24.04) ### Who can help? @ArthurZucker ? idk I will open an PR once i find time. I "fixed" it locally for now. ### Information - [ ] The official example scripts - [ ] My own modified scripts #...
closed
completed
false
4
[ "bug" ]
[]
2026-04-13T13:57:05Z
2026-04-13T15:42:10Z
2026-04-13T15:42:10Z
NONE
null
20260414T122001Z
2026-04-14T12:20:01Z
asdat3
29,544,382
MDQ6VXNlcjI5NTQ0Mzgy
User
false