Snapshot schema (26 fields per issue record):

  repo: string
  github_id: int64
  github_node_id: string
  number: int64
  html_url: string
  api_url: string
  title: string
  body: string
  state: string
  state_reason: string
  locked: bool
  comments_count: int64
  labels: list
  assignees: list
  created_at: string
  updated_at: string
  closed_at: string
  author_association: string
  milestone_title: string
  snapshot_id: string
  extracted_at: string
  author_login: string
  author_id: int64
  author_node_id: string
  author_type: string
  author_site_admin: bool
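The serialization of the snapshot is not shown in this section, only its schema and values. As a minimal sketch, assuming the snapshot is exported as JSON Lines with keys matching the field names above (an assumption about the export format; the file name "issues_snapshot.jsonl" is hypothetical), the records can be loaded into a typed structure like this:

```python
# Minimal sketch of a typed record matching the schema above. Assumes a JSON
# Lines export whose keys are exactly these field names; the file name used
# below is hypothetical, not part of the snapshot itself.
import json
from dataclasses import dataclass


@dataclass
class IssueRecord:
    repo: str
    github_id: int
    github_node_id: str
    number: int
    html_url: str
    api_url: str
    title: str
    body: str | None            # a few records below carry a null body
    state: str                  # "open" or "closed"
    state_reason: str | None    # "completed", "not_planned", "reopened", or null
    locked: bool
    comments_count: int
    labels: list[str]
    assignees: list[str]
    created_at: str             # ISO-8601 UTC timestamps
    updated_at: str
    closed_at: str | None       # null while the issue is still open
    author_association: str     # e.g. "NONE", "CONTRIBUTOR", "MEMBER"
    milestone_title: str | None
    snapshot_id: str            # compact timestamp of the snapshot run
    extracted_at: str
    author_login: str
    author_id: int
    author_node_id: str
    author_type: str
    author_site_admin: bool


def load_records(path: str) -> list[IssueRecord]:
    """Parse one JSON object per line into IssueRecord instances."""
    with open(path, encoding="utf-8") as fh:
        return [IssueRecord(**json.loads(line)) for line in fh if line.strip()]


# Usage (hypothetical file name):
# records = load_records("issues_snapshot.jsonl")
# open_bugs = [r for r in records if r.state == "open" and "bug" in r.labels]
```

The records themselves follow, one block per issue.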
repo: huggingface/transformers | github_id: 4256151805 | github_node_id: I_kwDOCUB6oc79r7j9 | number: 45412
html_url: https://github.com/huggingface/transformers/issues/45412
api_url: https://api.github.com/repos/huggingface/transformers/issues/45412
title: RT-DETR models do not release memory when deleted / garbage-collected
body: ### System Info Transfomers: 5.5.3 PyTorch: 2.8.0+cu126 TorchVision: 0.23.0+cu126 System: Debian 13 (trixie) Python: 3.13.5 ### Who can help? @yonigozlan @molbap ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder ...
state: closed | state_reason: completed | locked: false | comments_count: 8
labels: [ "bug" ] | assignees: []
created_at: 2026-04-13T15:46:43Z | updated_at: 2026-05-15T20:21:31Z | closed_at: 2026-05-15T20:21:31Z
author_association: NONE | milestone_title: null
snapshot_id: 20260516T000042Z | extracted_at: 2026-05-16T00:00:42Z
author_login: dhdaines | author_id: 3325008 | author_node_id: MDQ6VXNlcjMzMjUwMDg= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4258468813 | github_node_id: I_kwDOCUB6oc790xPN | number: 45419
html_url: https://github.com/huggingface/transformers/issues/45419
api_url: https://api.github.com/repos/huggingface/transformers/issues/45419
title: Chat template inconsistencies in tool-calling support
body: Chat templates across model families handle tool-calling messages inconsistently. This creates fragility for any library (like TRL) that needs to construct tool-calling conversations programmatically, since there's no single "safe" way to build an assistant message with `tool_calls`. I ran a systematic check across al...
state: open | state_reason: null | locked: false | comments_count: 2
labels: [] | assignees: []
created_at: 2026-04-13T23:27:06Z | updated_at: 2026-04-30T17:11:09Z | closed_at: null
author_association: MEMBER | milestone_title: null
snapshot_id: 20260501T113108Z | extracted_at: 2026-05-01T11:31:08Z
author_login: qgallouedec | author_id: 45557362 | author_node_id: MDQ6VXNlcjQ1NTU3MzYy | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4262515718 | github_node_id: I_kwDOCUB6oc7-ENQG | number: 45431
html_url: https://github.com/huggingface/transformers/issues/45431
api_url: https://api.github.com/repos/huggingface/transformers/issues/45431
title: Wrong checkpoint path in Dinov2 model_docs
body: Wrong checkpoint path in Dinov2 model_docs. The current checkpoint "google/dinov2-base-patch16-224" does not exist. The correct one should be "facebook/dinov-base". This issue is fixed in PR #45430 ### Who can help? @yonigozlan @molbap @stevhliu ### Notes This is a minor issue, but it might help new users. Than...
state: closed | state_reason: completed | locked: false | comments_count: 0
labels: [] | assignees: []
created_at: 2026-04-14T13:53:21Z | updated_at: 2026-04-15T09:02:16Z | closed_at: 2026-04-15T09:02:16Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260415T224019Z | extracted_at: 2026-04-15T22:40:19Z
author_login: ambroiseodt | author_id: 64415312 | author_node_id: MDQ6VXNlcjY0NDE1MzEy | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4263833996 | github_node_id: I_kwDOCUB6oc7-JPGM | number: 45440
html_url: https://github.com/huggingface/transformers/issues/45440
api_url: https://api.github.com/repos/huggingface/transformers/issues/45440
title: Native `DeepseekV3MoE` diverges from the remote DeepSeekV3 implementation
body: ### System Info Hello, the `DeepseekV3MoE` class in transformers (native) differs from the official remote DeepSeekV3 implementation (which was updated for a bug but not in `transformers`, hence the difference). <details> <summary> See trf DeepSeekV3 MoE code </summary> https://github.com/huggingface/transformers/bl...
state: closed | state_reason: completed | locked: false | comments_count: 0
labels: [ "bug" ] | assignees: []
created_at: 2026-04-14T17:50:49Z | updated_at: 2026-04-22T04:53:27Z | closed_at: 2026-04-22T04:53:26Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260422T060051Z | extracted_at: 2026-04-22T06:00:51Z
author_login: casinca | author_id: 47400729 | author_node_id: MDQ6VXNlcjQ3NDAwNzI5 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4266475063 | github_node_id: I_kwDOCUB6oc7-TT43 | number: 45446
html_url: https://github.com/huggingface/transformers/issues/45446
api_url: https://api.github.com/repos/huggingface/transformers/issues/45446
title: Incorrect PyTorch version check for AuxRequest import in flex_attention
body: ### System Info In src/transformers/integrations/flex_attention.py, the code currently checks for PyTorch version >= 2.9.0 to import AuxRequest from torch.nn.attention.flex_attention. However, AuxRequest was actually introduced in PyTorch 2.9.1. According to the official PyTorch documentation, AuxRequest is available ...
state: closed | state_reason: completed | locked: false | comments_count: 1
labels: [ "bug" ] | assignees: []
created_at: 2026-04-15T05:26:22Z | updated_at: 2026-04-15T11:36:37Z | closed_at: 2026-04-15T11:35:34Z
author_association: NONE | milestone_title: null
snapshot_id: 20260415T224019Z | extracted_at: 2026-04-15T22:40:19Z
author_login: ZSLsherly | author_id: 142322697 | author_node_id: U_kgDOCHusCQ | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4266677435 | github_node_id: I_kwDOCUB6oc7-UFS7 | number: 45447
html_url: https://github.com/huggingface/transformers/issues/45447
api_url: https://api.github.com/repos/huggingface/transformers/issues/45447
title: granitemoehybrid: HybridMambaAttentionDynamicCache missing from modeling_granitemoehybrid — breaks ibm-granite/granite-4.0-3b-vision remote code
body: ## Summary The `ibm-granite/granite-4.0-3b-vision` model's remote `modeling.py` imports `HybridMambaAttentionDynamicCache` from `transformers.models.granitemoehybrid.modeling_granitemoehybrid`. This class does not exist in transformers 5.5.4 (latest) or on the current `main` branch, causing an `ImportError` whenever a...
state: open | state_reason: null | locked: false | comments_count: 4
labels: [] | assignees: []
created_at: 2026-04-15T06:13:12Z | updated_at: 2026-05-15T08:58:11Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260515T120027Z | extracted_at: 2026-05-15T12:00:27Z
author_login: Steve-Allison | author_id: 3996420 | author_node_id: MDQ6VXNlcjM5OTY0MjA= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4268948257 | github_node_id: I_kwDOCUB6oc7-cvsh | number: 45458
html_url: https://github.com/huggingface/transformers/issues/45458
api_url: https://api.github.com/repos/huggingface/transformers/issues/45458
title: Add typing support incrementally (meta issue)
body: We’re progressively adding typing support to the codebase using `ty`. This issue tracks the overall progress as we extend coverage directory by directory. # Current status The tooling is already in place. Type checking is enabled for a subset of directories You can run it locally with: ``` make typing ``` # How to...
state: closed | state_reason: completed | locked: false | comments_count: 0
labels: [] | assignees: [ "tarekziade" ]
created_at: 2026-04-15T12:36:24Z | updated_at: 2026-04-30T07:55:39Z | closed_at: 2026-04-30T07:55:39Z
author_association: MEMBER | milestone_title: null
snapshot_id: 20260430T120024Z | extracted_at: 2026-04-30T12:00:24Z
author_login: tarekziade | author_id: 250019 | author_node_id: MDQ6VXNlcjI1MDAxOQ== | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4269023860 | github_node_id: I_kwDOCUB6oc7-dCJ0 | number: 45459
html_url: https://github.com/huggingface/transformers/issues/45459
api_url: https://api.github.com/repos/huggingface/transformers/issues/45459
title: `except import_protobuf_decode_error()` hides real tokenizer errors when protobuf isn't installed
body: ### System Info transformers 5.5.4 (latest release) and 5.6.0.dev0 (main). `PreTrainedTokenizerBase._from_pretrained` has `except import_protobuf_decode_error():` at `src/transformers/tokenization_utils_base.py:1919` (line 1933 on main). The helper raises `ImportError` when protobuf isn't installed. The except-class ...
state: closed | state_reason: completed | locked: false | comments_count: 4
labels: [ "bug" ] | assignees: []
created_at: 2026-04-15T12:48:42Z | updated_at: 2026-04-20T12:33:17Z | closed_at: 2026-04-20T12:33:17Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260420T180040Z | extracted_at: 2026-04-20T18:00:40Z
author_login: jw9603 | author_id: 70795645 | author_node_id: MDQ6VXNlcjcwNzk1NjQ1 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4270313753 | github_node_id: I_kwDOCUB6oc7-h9EZ | number: 45464
html_url: https://github.com/huggingface/transformers/issues/45464
api_url: https://api.github.com/repos/huggingface/transformers/issues/45464
title: chat/completions API fail on Qwen3.5-0.8B for streaming inference
body: ### System Info - `transformers` version: 5.5.0 - 5.5.4 - Platform: macOS-26.4.1-arm64-arm-64bit-Mach-O - Python version: 3.14.3 - Huggingface_hub version: 1.10.2 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerat...
state: open | state_reason: null | locked: false | comments_count: 3
labels: [ "bug" ] | assignees: []
created_at: 2026-04-15T16:36:31Z | updated_at: 2026-05-16T08:31:39Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260516T120032Z | extracted_at: 2026-05-16T12:00:32Z
author_login: zhangwei217245 | author_id: 346451 | author_node_id: MDQ6VXNlcjM0NjQ1MQ== | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4273783730 | github_node_id: I_kwDOCUB6oc7-vMOy | number: 45468
html_url: https://github.com/huggingface/transformers/issues/45468
api_url: https://api.github.com/repos/huggingface/transformers/issues/45468
title: [BUG] Gemma-4 Gemma4AudioRelPositionalEncoding
body: ### System Info N/A. ### Who can help? The hard coded numbers **12** and **-1** seem to be related to `attention_context_left` and `attention_context_right`. https://github.com/huggingface/transformers/blob/8426e7e63d49d9c3b5f0c09d43e792a59c75c62c/src/transformers/models/gemma4/modular_gemma4.py#L160 @eustlb @ebez...
state: open | state_reason: null | locked: false | comments_count: 5
labels: [ "bug" ] | assignees: []
created_at: 2026-04-16T06:52:43Z | updated_at: 2026-04-23T14:43:10Z | closed_at: null
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260423T180030Z | extracted_at: 2026-04-23T18:00:30Z
author_login: foldl | author_id: 4046440 | author_node_id: MDQ6VXNlcjQwNDY0NDA= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4276519756 | github_node_id: I_kwDOCUB6oc7-5oNM | number: 45478
html_url: https://github.com/huggingface/transformers/issues/45478
api_url: https://api.github.com/repos/huggingface/transformers/issues/45478
title: [BUG] transformers>=5.4.0, Qwen3.5 Moe from_pretrained error
body: ### System Info https://github.com/huggingface/transformers/issues/45310 This issue has not been fixed in the main branch. ``` import os os.environ['CUDA_VISIBLE_DEVICS'] = '0' from transformers import Qwen3_5ForConditionalGeneration, AutoTokenizer model = Qwen3_5ForConditionalGeneration.from_pretrained('Qwen/Qwe...
state: closed | state_reason: completed | locked: false | comments_count: 4
labels: [ "Should Fix", "bug" ] | assignees: []
created_at: 2026-04-16T14:48:15Z | updated_at: 2026-04-20T01:35:12Z | closed_at: 2026-04-20T01:31:00Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260420T060046Z | extracted_at: 2026-04-20T06:00:46Z
author_login: Jintao-Huang | author_id: 45290347 | author_node_id: MDQ6VXNlcjQ1MjkwMzQ3 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4276582143 | github_node_id: I_kwDOCUB6oc7-53b_ | number: 45479
html_url: https://github.com/huggingface/transformers/issues/45479
api_url: https://api.github.com/repos/huggingface/transformers/issues/45479
title: `problem_type="single_label_classification"` with `num_labels=1` leads to degenerate zero loss across multiple sequence-classification models
body: ### System Info Hi, I found what looks like a library-wide issue in `transformers` affecting multiple `ForSequenceClassification` models, not just ModernBERT. If a model is initialized with: ```python num_labels=1 problem_type="single_label_classification" ``` the forward pass uses `CrossEntropyLoss()` with only one ...
state: closed | state_reason: completed | locked: false | comments_count: 3
labels: [ "bug" ] | assignees: []
created_at: 2026-04-16T14:58:54Z | updated_at: 2026-04-24T16:51:31Z | closed_at: 2026-04-24T16:51:31Z
author_association: NONE | milestone_title: null
snapshot_id: 20260424T180025Z | extracted_at: 2026-04-24T18:00:25Z
author_login: BohdanBabii | author_id: 73220903 | author_node_id: MDQ6VXNlcjczMjIwOTAz | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4276916345 | github_node_id: I_kwDOCUB6oc7-7JB5 | number: 45482
html_url: https://github.com/huggingface/transformers/issues/45482
api_url: https://api.github.com/repos/huggingface/transformers/issues/45482
title: Gemma4 26B-A4B: cross-device errors with CPU offload (RoPE, inputs, layer_scalar, SDPA mask, mm_token_type_ids)
body: # Bug: Gemma4 cross-device tensor errors with accelerate CPU offload ## Environment - transformers latest (Gemma4 support, `modeling_gemma4.py`) - Gemma4 26B-A4B-it (MoE, 4B active params) - `accelerate` device_map with CPU offload (layers overflow to RAM) - BnB INT8 + PEFT LoRA + Gradient Checkpointing - RTX 4090 (2...
state: open | state_reason: null | locked: false | comments_count: 3
labels: [] | assignees: []
created_at: 2026-04-16T15:57:28Z | updated_at: 2026-04-20T12:19:31Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260420T180040Z | extracted_at: 2026-04-20T18:00:40Z
author_login: sirfyyn | author_id: 31549942 | author_node_id: MDQ6VXNlcjMxNTQ5OTQy | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 647983215 | github_node_id: MDU6SXNzdWU2NDc5ODMyMTU= | number: 5391
html_url: https://github.com/huggingface/transformers/issues/5391
api_url: https://api.github.com/repos/huggingface/transformers/issues/5391
title: Training a GPT-2 from scratch in Greek-text, results in a low perplexity score of 7 after 15 epochs. Is it normal that score?
body: # ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make s...
state: closed | state_reason: completed | locked: false | comments_count: 3
labels: [ "wontfix" ] | assignees: []
created_at: 2020-06-30T08:37:47Z | updated_at: 2026-02-09T16:33:04Z | closed_at: 2020-09-13T17:12:37Z
author_association: NONE | milestone_title: null
snapshot_id: 20260325T173244Z | extracted_at: 2026-03-25T17:32:44Z
author_login: Nkonstan | author_id: 35643708 | author_node_id: MDQ6VXNlcjM1NjQzNzA4 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 665862946 | github_node_id: MDU6SXNzdWU2NjU4NjI5NDY= | number: 6045
html_url: https://github.com/huggingface/transformers/issues/6045
api_url: https://api.github.com/repos/huggingface/transformers/issues/6045
title: Test BART's memory consumption
body: - this can run on GPU only and be marked `@slow` - check how much memory bart is using at `__init__` - assert that it doesn't use more than 110% of that. - check how much memory bart uses on a single forward pass. (optionally test this in fp16). - assert that it doesn't use more than 110% of that. - check how much...
state: closed | state_reason: completed | locked: false | comments_count: 11
labels: [ "Help wanted", "Tests", "Benchmarks", "WIP" ] | assignees: [ "stas00", "patrickvonplaten" ]
created_at: 2020-07-26T21:24:39Z | updated_at: 2026-02-10T13:24:28Z | closed_at: 2026-02-10T13:13:03Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260325T173244Z | extracted_at: 2026-03-25T17:32:44Z
author_login: sshleifer | author_id: 6045025 | author_node_id: MDQ6VXNlcjYwNDUwMjU= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 718927476 | github_node_id: MDU6SXNzdWU3MTg5Mjc0NzY= | number: 7715
html_url: https://github.com/huggingface/transformers/issues/7715
api_url: https://api.github.com/repos/huggingface/transformers/issues/7715
title: examples/rag: test coverage, tiny model
body: Disclaimer: I don't know this code very well, this may be much harder than it seems. Blocking PR: #7713 [`examples/rag/finetune.py`, `examples/rag/finetune.sh`, `eval_rag.py`] do not seem to be tested at all. It would be good to have a `test_finetune.py` like `examples/seq2seq` that tested these. cc @stas00 ...
state: closed | state_reason: completed | locked: false | comments_count: 6
labels: [ "Help wanted", "Tests", "rag", "Feature request" ] | assignees: []
created_at: 2020-10-11T21:09:58Z | updated_at: 2026-02-10T13:24:36Z | closed_at: 2026-02-10T13:07:03Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260325T173244Z | extracted_at: 2026-03-25T17:32:44Z
author_login: sshleifer | author_id: 6045025 | author_node_id: MDQ6VXNlcjYwNDUwMjU= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 2501014784 | github_node_id: I_kwDOCUB6oc6VEnUA | number: 33260
html_url: https://github.com/huggingface/transformers/issues/33260
api_url: https://api.github.com/repos/huggingface/transformers/issues/33260
title: Community contribution: Adding GGUF support for more architectures
body: ### Feature request Recently, we have added the ability to load `gguf` files within [transformers](https://huggingface.co/docs/hub/en/gguf). <img src="https://github.com/user-attachments/assets/61df6455-6016-449e-a37f-9dfc7f918902" width="600"> The goal was to offer the possibility to users to further train/fine-tu...
state: open | state_reason: null | locked: false | comments_count: 47
labels: [ "Good Second Issue", "Feature request" ] | assignees: []
created_at: 2024-09-02T13:41:47Z | updated_at: 2026-04-21T04:38:15Z | closed_at: null
author_association: MEMBER | milestone_title: null
snapshot_id: 20260421T060039Z | extracted_at: 2026-04-21T06:00:39Z
author_login: SunMarc | author_id: 57196510 | author_node_id: MDQ6VXNlcjU3MTk2NTEw | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4281338037 | github_node_id: I_kwDOCUB6oc7_MAi1 | number: 45488
html_url: https://github.com/huggingface/transformers/issues/45488
api_url: https://api.github.com/repos/huggingface/transformers/issues/45488
title: LlamaTokenizer in v5 overrides tokenizer.json's ByteLevel pre-tokenizer with Metaspace, silently breaks DeepSeek V3/R1 family
body: ### System info - `transformers`: 5.3.0 - `tokenizers`: 0.22.2 - Python: 3.12 / Linux ### Who can help? @ArthurZucker ### Reproduction Tokenizer-only, ~7 MB download: ```python from transformers import AutoTokenizer tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3") print(repr(tok.decode(tok.encode("h...
state: open | state_reason: null | locked: false | comments_count: 7
labels: [] | assignees: []
created_at: 2026-04-17T08:50:48Z | updated_at: 2026-05-15T19:17:25Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260516T000042Z | extracted_at: 2026-05-16T00:00:42Z
author_login: dc3671 | author_id: 5948851 | author_node_id: MDQ6VXNlcjU5NDg4NTE= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4282845832 | github_node_id: I_kwDOCUB6oc7_RwqI | number: 45491
html_url: https://github.com/huggingface/transformers/issues/45491
api_url: https://api.github.com/repos/huggingface/transformers/issues/45491
title: [Gemma3] NaN embeddings on GPU when batching sequences of mixed length (sliding window attention + all-padding windows)
body: ### System Info **System Info** - `transformers`: 4.45.1 - `sentence-transformers`: 5.1.2 - `tokenizers`: 0.20.0 - `safetensors`: 0.4.5 - `PyTorch`: ≥ 2.6.0 - Serving runtime: `pytorch/torchserve-kfs:0.12.0` (Python 3.9, Linux x86_64) - GPU: NVIDIA (CUDA, via KServe / TorchServe on Kubernetes) - CPU inference: **not ...
state: closed | state_reason: completed | locked: false | comments_count: 8
labels: [ "bug" ] | assignees: []
created_at: 2026-04-17T13:10:25Z | updated_at: 2026-04-21T10:37:02Z | closed_at: 2026-04-21T10:37:02Z
author_association: NONE | milestone_title: null
snapshot_id: 20260421T120044Z | extracted_at: 2026-04-21T12:00:44Z
author_login: RiccardoTOTI | author_id: 43544166 | author_node_id: MDQ6VXNlcjQzNTQ0MTY2 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4285307501 | github_node_id: I_kwDOCUB6oc7_bJpt | number: 45496
html_url: https://github.com/huggingface/transformers/issues/45496
api_url: https://api.github.com/repos/huggingface/transformers/issues/45496
title: Add V-JEPA 2.1 inference support
body: ### Feature request Meta released [V-JEPA 2.1](https://github.com/facebookresearch/vjepa2) on 2026-03-16 with four pretrained video encoders at 384 resolution (ViT-B 80M, ViT-L 300M, ViT-g 1B, ViT-G 2B). The existing `vjepa2` model family in transformers supports V-JEPA 2.0 but not 2.1. V-JEPA 2.1 introduces several ...
state: open | state_reason: null | locked: false | comments_count: 0
labels: [ "Feature request" ] | assignees: []
created_at: 2026-04-17T20:59:54Z | updated_at: 2026-04-17T20:59:54Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260417T210541Z | extracted_at: 2026-04-17T21:05:41Z
author_login: davevanveen | author_id: 25591765 | author_node_id: MDQ6VXNlcjI1NTkxNzY1 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4288654586 | github_node_id: I_kwDOCUB6oc7_n6z6 | number: 45507
html_url: https://github.com/huggingface/transformers/issues/45507
api_url: https://api.github.com/repos/huggingface/transformers/issues/45507
title: GraniteMoEHybrid Model Calls Invalid Method
body: ### System Info Linux: Ubuntu 24.04.4 LTS / 6.8.0-107-generic-64k / aarch64 Python: 3.12.12 Transformers: 5.5.4 Cuda: 12.9 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such a...
state: open | state_reason: null | locked: false | comments_count: 0
labels: [ "bug" ] | assignees: []
created_at: 2026-04-18T17:07:36Z | updated_at: 2026-04-18T17:10:09Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260418T190539Z | extracted_at: 2026-04-18T19:05:39Z
author_login: rnowling | author_id: 1114888 | author_node_id: MDQ6VXNlcjExMTQ4ODg= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4291972999 | github_node_id: I_kwDOCUB6oc7_0k-H | number: 45517
html_url: https://github.com/huggingface/transformers/issues/45517
api_url: https://api.github.com/repos/huggingface/transformers/issues/45517
title: MPS OOM error, finetuning T5Gemma2 with Seq2SeqTrainer
body: ### System Info MacOS M3 24gb ram Tahoe, MPS backend, transformers 5.5.4 (with #45516 applied), torch 2.10.0 ### Who can help? @SunMarc ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, .....
state: closed | state_reason: not_planned | locked: false | comments_count: 3
labels: [ "bug" ] | assignees: []
created_at: 2026-04-19T20:47:00Z | updated_at: 2026-04-20T19:54:57Z | closed_at: 2026-04-20T19:54:57Z
author_association: NONE | milestone_title: null
snapshot_id: 20260421T000044Z | extracted_at: 2026-04-21T00:00:44Z
author_login: Tokarak | author_id: 63452145 | author_node_id: MDQ6VXNlcjYzNDUyMTQ1 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4292666480 | github_node_id: I_kwDOCUB6oc7_3ORw | number: 45518
html_url: https://github.com/huggingface/transformers/issues/45518
api_url: https://api.github.com/repos/huggingface/transformers/issues/45518
title: Expose static_graph DDP flag via TrainingArguments
body: ### Feature request Add a `ddp_static_graph: Optional[bool]` field to [`TrainingArguments`](https://github.com/huggingface/transformers/blob/a29df2d916e3b820aecd19d3b5a877abc523ba3c/src/transformers/training_args.py#L1370-L1377) (mirroring the existing `ddp_broadcast_buffers` pattern) and forward it through [`Trainer....
state: closed | state_reason: completed | locked: false | comments_count: 0
labels: [] | assignees: []
created_at: 2026-04-20T02:00:45Z | updated_at: 2026-04-21T12:46:41Z | closed_at: 2026-04-21T12:46:41Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260421T180049Z | extracted_at: 2026-04-21T18:00:49Z
author_login: KeitaW | author_id: 8693216 | author_node_id: MDQ6VXNlcjg2OTMyMTY= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 3713335476 | github_node_id: I_kwDOCUB6oc7dVQC0 | number: 42757
html_url: https://github.com/huggingface/transformers/issues/42757
api_url: https://api.github.com/repos/huggingface/transformers/issues/42757
title: cannot import name 'is_offline_mode' from 'huggingface_hub'
body: ### System Info - transformers-5.0.0 - huggingface_hub-1.2.1 ``` ImportError: cannot import name 'is_offline_mode' from 'huggingface_hub' (/root/miniconda3/envs/transformers/lib/python3.10/site-packages/huggingface_hub/__init__.py) ``` ### Who can help? _No response_ ### Information - [ ] The official example scri...
state: closed | state_reason: completed | locked: false | comments_count: 2
labels: [ "bug" ] | assignees: []
created_at: 2025-12-10T02:43:43Z | updated_at: 2026-05-04T06:49:09Z | closed_at: 2025-12-10T03:16:05Z
author_association: NONE | milestone_title: null
snapshot_id: 20260504T180032Z | extracted_at: 2026-05-04T18:00:32Z
author_login: dollarser | author_id: 28535623 | author_node_id: MDQ6VXNlcjI4NTM1NjIz | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4293027800 | github_node_id: I_kwDOCUB6oc7_4mfY | number: 45520
html_url: https://github.com/huggingface/transformers/issues/45520
api_url: https://api.github.com/repos/huggingface/transformers/issues/45520
title: KeyError: 'flash_attn' in import_utils.py when running on Python 3.13
body: ### System Info ``` Using Python 3.13.12 environment at: ComfyUI-ROCm/.venv Name: accelerate Version: 1.13.0 Location: /home/laichiaheng/ComfyUI-ROCm/.venv/lib/python3.13/site-packages Requires: huggingface-hub, numpy, packaging, psutil, pyyaml, safetensors, torch Required-by: peft --- Name: diffusers Version: 0.37.1 ...
state: open | state_reason: null | locked: false | comments_count: 2
labels: [ "bug" ] | assignees: []
created_at: 2026-04-20T04:08:22Z | updated_at: 2026-04-25T22:37:44Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260426T000021Z | extracted_at: 2026-04-26T00:00:21Z
author_login: laichiaheng | author_id: 9532105 | author_node_id: MDQ6VXNlcjk1MzIxMDU= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4293135092 | github_node_id: I_kwDOCUB6oc7_5Ar0 | number: 45521
html_url: https://github.com/huggingface/transformers/issues/45521
api_url: https://api.github.com/repos/huggingface/transformers/issues/45521
title: T5Gemma2: decoder self-attention fixed 4097-element mask at batch=1, fails on inputs >4094 tokens
body: ### System Info - transformers `5.0.0` (T5Gemma 2 support shipped in #41834) - torch `2.8.0`, CUDA `12.8.1`, Python `3.12` - Hardware: 1× NVIDIA H100 NVL 94 GB (reproduced on same bug under A100 80 GB SXM) - Model: `google/t5gemma-2-4b-4b` (gated) - Base image: `runpod/pytorch:1.0.3-cu1281-torch280-ubuntu2404` ### Wh...
state: open | state_reason: null | locked: false | comments_count: 7
labels: [ "Good Second Issue" ] | assignees: []
created_at: 2026-04-20T04:44:55Z | updated_at: 2026-04-26T04:37:24Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260426T060015Z | extracted_at: 2026-04-26T06:00:15Z
author_login: arunkumarchithanar | author_id: 573117 | author_node_id: MDQ6VXNlcjU3MzExNw== | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4293180071 | github_node_id: I_kwDOCUB6oc7_5Lqn | number: 45522
html_url: https://github.com/huggingface/transformers/issues/45522
api_url: https://api.github.com/repos/huggingface/transformers/issues/45522
title: Feature request: Flash Attention 2 support for T5Gemma 2
body: ### Feature request Add Flash Attention 2 support for `T5Gemma2ForConditionalGeneration` (and companion variants: encoder, decoder, etc., wherever `attn_implementation="flash_attention_2"` currently raises). Currently, loading the model with FA2 fails at dispatch time: ```python from transformers import AutoModelFor...
state: closed | state_reason: completed | locked: false | comments_count: 5
labels: [] | assignees: []
created_at: 2026-04-20T04:57:54Z | updated_at: 2026-05-15T13:00:48Z | closed_at: 2026-05-15T13:00:22Z
author_association: NONE | milestone_title: null
snapshot_id: 20260515T180026Z | extracted_at: 2026-05-15T18:00:26Z
author_login: arunkumarchithanar | author_id: 573117 | author_node_id: MDQ6VXNlcjU3MzExNw== | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4294932277 | github_node_id: I_kwDOCUB6oc7__3c1 | number: 45529
html_url: https://github.com/huggingface/transformers/issues/45529
api_url: https://api.github.com/repos/huggingface/transformers/issues/45529
title: Add `Olmo2ForSequenceClassification` (and ideally `OlmoForSequenceClassification` / `Olmo3ForSequenceClassification`)
body: `AutoModelForSequenceClassification.from_pretrained("allenai/OLMo-2-0425-1B")` currently fails because the OLMo family exposes only `*Model` and `*ForCausalLM`. All peer decoder architectures (Llama, Mistral, Qwen2, Gemma, Falcon, etc.) ship `ForSequenceClassification`. ## Motivation I teach the graduate Applied Deep...
state: closed | state_reason: completed | locked: false | comments_count: 2
labels: [] | assignees: []
created_at: 2026-04-20T10:22:03Z | updated_at: 2026-04-22T14:41:36Z | closed_at: 2026-04-22T14:41:36Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260422T180025Z | extracted_at: 2026-04-22T18:00:25Z
author_login: earino | author_id: 3258 | author_node_id: MDQ6VXNlcjMyNTg= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4299297967 | github_node_id: I_kwDOCUB6oc8AAAABAEIUrw | number: 45536
html_url: https://github.com/huggingface/transformers/issues/45536
api_url: https://api.github.com/repos/huggingface/transformers/issues/45536
title: Feature Request: Add SCAO Optimizer integration for 1.5x faster fine-tuning throughput
body: ### Feature request Currently, AdamW is the default standard for fine-tuning via the Trainer class. While robust, its diagonal approximation of loss curvature makes early convergence slow, which is particularly expensive in compute-constrained environments or rapid fine-tuning pipelines (like LoRA/PEFT) where wall-clo...
state: closed | state_reason: completed | locked: false | comments_count: 1
labels: [ "Feature request" ] | assignees: []
created_at: 2026-04-20T23:31:39Z | updated_at: 2026-04-21T12:21:07Z | closed_at: 2026-04-21T12:21:07Z
author_association: NONE | milestone_title: null
snapshot_id: 20260421T180049Z | extracted_at: 2026-04-21T18:00:49Z
author_login: whispering3 | author_id: 139091824 | author_node_id: U_kgDOCEpfcA | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4299470795 | github_node_id: I_kwDOCUB6oc8AAAABAES3yw | number: 45538
html_url: https://github.com/huggingface/transformers/issues/45538
api_url: https://api.github.com/repos/huggingface/transformers/issues/45538
title: CLIPTokenizer uses 10**30 as `model_max_length`
body: ### System Info ``` transformers==5.5.4 python3.12 ``` vs ``` transformers==4.57.6 python3.12 ``` ### Who can help? @ArthurZucker @Cyrilvallez ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD...
state: open | state_reason: null | locked: false | comments_count: 3
labels: [ "bug" ] | assignees: []
created_at: 2026-04-21T00:19:55Z | updated_at: 2026-04-21T06:57:34Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260421T120044Z | extracted_at: 2026-04-21T12:00:44Z
author_login: D1-3105 | author_id: 65292437 | author_node_id: MDQ6VXNlcjY1MjkyNDM3 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4300007163 | github_node_id: I_kwDOCUB6oc8AAAABAEzm-w | number: 45542
html_url: https://github.com/huggingface/transformers/issues/45542
api_url: https://api.github.com/repos/huggingface/transformers/issues/45542
title: Only tensorboard is installed without TensorFlow, causing undefined tf backend error
body: ## System Info - Docker Image: `verlai/verl:vllm018.dev1` - Transformers version: **5.5.4** (manually updated) - Python version: 3.12 - Dependencies: **TensorFlow is NOT installed**, only TensorBoard is installed ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own mod...
state: closed | state_reason: completed | locked: false | comments_count: 2
labels: [ "bug" ] | assignees: []
created_at: 2026-04-21T03:05:22Z | updated_at: 2026-04-23T07:24:15Z | closed_at: 2026-04-23T07:23:23Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260423T120024Z | extracted_at: 2026-04-23T12:00:24Z
author_login: enze5088 | author_id: 14285786 | author_node_id: MDQ6VXNlcjE0Mjg1Nzg2 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4301312789 | github_node_id: I_kwDOCUB6oc8AAAABAGDTFQ | number: 45545
html_url: https://github.com/huggingface/transformers/issues/45545
api_url: https://api.github.com/repos/huggingface/transformers/issues/45545
title: Transformers is trying to call home despite `local_files_only=True`
body: ### System Info ```log Traceback (most recent call last): File "/home/user/.local/lib/python3.12/site-packages/httpcore/_exceptions.py", line 10, in map_exceptions yield File "/home/user/.local/lib/python3.12/site-packages/httpcore/backends/sync.py", line 94, in connect_tcp sock = socket.create_connection(...
state: closed | state_reason: completed | locked: false | comments_count: 9
labels: [ "bug" ] | assignees: []
created_at: 2026-04-21T08:29:26Z | updated_at: 2026-04-22T18:09:18Z | closed_at: 2026-04-22T17:33:26Z
author_association: NONE | milestone_title: null
snapshot_id: 20260423T000027Z | extracted_at: 2026-04-23T00:00:27Z
author_login: Sur3 | author_id: 17153578 | author_node_id: MDQ6VXNlcjE3MTUzNTc4 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4305393654 | github_node_id: I_kwDOCUB6oc8AAAABAJ8X9g | number: 45557
html_url: https://github.com/huggingface/transformers/issues/45557
api_url: https://api.github.com/repos/huggingface/transformers/issues/45557
title: Pipeline num_workers runtime default is 0 but documentation states 8
body: ## `num_workers` runtime default does not match documented default of `8` ### Description The docstring for `num_workers` in `build_pipeline_init_args()` states a default of `8`, but the actual runtime fallback in `Pipeline.__call__()` is `0`. ### Steps to Reproduce Inspect `src/transformers/pipelines/base.py`: **...
state: closed | state_reason: completed | locked: false | comments_count: 1
labels: [] | assignees: []
created_at: 2026-04-21T21:30:21Z | updated_at: 2026-04-23T11:34:26Z | closed_at: 2026-04-23T11:34:26Z
author_association: NONE | milestone_title: null
snapshot_id: 20260423T120024Z | extracted_at: 2026-04-23T12:00:24Z
author_login: Leonater | author_id: 83777478 | author_node_id: MDQ6VXNlcjgzNzc3NDc4 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4306489250 | github_node_id: I_kwDOCUB6oc8AAAABAK_Pog | number: 45561
html_url: https://github.com/huggingface/transformers/issues/45561
api_url: https://api.github.com/repos/huggingface/transformers/issues/45561
title: [Bug] pytest-xdist workers race on captured_info.txt in patched testing utils
body: ### System Info Transformers=5.6.0.dev0 (local checkout 85099df959); Python=3.13.5; Platform=macOS-26.2-arm64-arm-64bit-Mach-O; pytest=8.4.2; pytest-xdist=3.8.0 ### Who can help? @ydshieh @SunMarc ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially ...
state: open | state_reason: null | locked: false | comments_count: 2
labels: [ "bug" ] | assignees: []
created_at: 2026-04-22T03:12:11Z | updated_at: 2026-05-05T04:23:38Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260505T060044Z | extracted_at: 2026-05-05T06:00:44Z
author_login: oleksii-tumanov | author_id: 6143578 | author_node_id: MDQ6VXNlcjYxNDM1Nzg= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4306660481 | github_node_id: I_kwDOCUB6oc8AAAABALJsgQ | number: 45563
html_url: https://github.com/huggingface/transformers/issues/45563
api_url: https://api.github.com/repos/huggingface/transformers/issues/45563
title: Paged generate() emits a stale warning for num_return_sequences
body: ### System Info Transformers version: 5.6.0.dev0 Platform: macOS-26.2-arm64-arm-64bit-Mach-O Python version: 3.13.5 (v3.13.5:6cb20a219a8, Jun 11 2025, 12:23:45) [Clang 16.0.0 (clang-1600.0.26.6)] PyTorch version: 2.11.0 CUDA available: False MPS available: True ### Who can help? @cyrilvallez @remi-or ### Informatio...
state: closed | state_reason: completed | locked: false | comments_count: 1
labels: [ "bug" ] | assignees: []
created_at: 2026-04-22T04:16:08Z | updated_at: 2026-04-24T07:59:32Z | closed_at: 2026-04-24T07:59:32Z
author_association: NONE | milestone_title: null
snapshot_id: 20260424T120023Z | extracted_at: 2026-04-24T12:00:23Z
author_login: oleksii-tumanov | author_id: 6143578 | author_node_id: MDQ6VXNlcjYxNDM1Nzg= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4308216831 | github_node_id: I_kwDOCUB6oc8AAAABAMor_w | number: 45571
html_url: https://github.com/huggingface/transformers/issues/45571
api_url: https://api.github.com/repos/huggingface/transformers/issues/45571
title: A nice UX for generating dynamic tensors that break torch.compile/torch.export
body: ### Feature request We have a couple tensors whose construction is highly dynamic and can't be captured while tracing (cu_seqlens, vision(image/video)_cu_seqlens, ... , vision(image/video)_position_ids, ...). I'm opening this issue to track what we are gonna do about them. The two options are either using the processo...
state: open | state_reason: null | locked: false | comments_count: 3
labels: [ "Feature request" ] | assignees: []
created_at: 2026-04-22T10:04:10Z | updated_at: 2026-04-26T21:51:04Z | closed_at: null
author_association: MEMBER | milestone_title: null
snapshot_id: 20260427T000017Z | extracted_at: 2026-04-27T00:00:17Z
author_login: IlyasMoutawwakil | author_id: 57442720 | author_node_id: MDQ6VXNlcjU3NDQyNzIw | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4310708891 | github_node_id: I_kwDOCUB6oc8AAAABAPAymw | number: 45584
html_url: https://github.com/huggingface/transformers/issues/45584
api_url: https://api.github.com/repos/huggingface/transformers/issues/45584
title: Whisper generation fails on empty transcription after align_special_tokens
body: ### System Info - `transformers` version: 5.6.0.dev0 - Platform: Linux-6.17.0-1009-gcp-x86_64-with-glibc2.39 - Python version: 3.12.3 - Huggingface_hub version: 1.11.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accel...
state: open | state_reason: null | locked: false | comments_count: 1
labels: [ "bug" ] | assignees: []
created_at: 2026-04-22T17:23:03Z | updated_at: 2026-05-07T17:02:24Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260507T180032Z | extracted_at: 2026-05-07T18:00:32Z
author_login: ronansgd | author_id: 63855061 | author_node_id: MDQ6VXNlcjYzODU1MDYx | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 1708804231 | github_node_id: I_kwDOCUB6oc5l2kiH | number: 23354
html_url: https://github.com/huggingface/transformers/issues/23354
api_url: https://api.github.com/repos/huggingface/transformers/issues/23354
title: Make it easy to get seperate "prints" for individual runs/ users when using Transformers Agent
body: ### Feature request I have started exploring the new Transformers Agent. And I would like to build a UI to help me speed up the process. I might be running multiple runs in parallel or have multiple users using my application. I would like to be able to stream the information from the run as it arrives. I would l...
state: closed | state_reason: completed | locked: false | comments_count: 5
labels: [ "Feature request" ] | assignees: []
created_at: 2023-05-14T03:31:49Z | updated_at: 2026-04-23T11:39:37Z | closed_at: 2026-04-23T11:39:32Z
author_association: NONE | milestone_title: null
snapshot_id: 20260423T120024Z | extracted_at: 2026-04-23T12:00:24Z
author_login: MarcSkovMadsen | author_id: 42288570 | author_node_id: MDQ6VXNlcjQyMjg4NTcw | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4312607522 | github_node_id: I_kwDOCUB6oc8AAAABAQ0rIg | number: 45588
html_url: https://github.com/huggingface/transformers/issues/45588
api_url: https://api.github.com/repos/huggingface/transformers/issues/45588
title: `integrations/flash_attention.py` crashes with `AttributeError` on `s_aux=None` for sink-less models
body: ### System Info - `transformers` version: 5.6.0 - Platform: Linux-6.8.0-1043-nvidia-x86_64-with-glibc2.35 - Python version: 3.12.13 - Huggingface_hub version: 1.11.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerat...
state: closed | state_reason: completed | locked: false | comments_count: 8
labels: [ "bug" ] | assignees: []
created_at: 2026-04-23T00:37:54Z | updated_at: 2026-05-03T20:01:32Z | closed_at: 2026-04-23T07:32:56Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260504T000033Z | extracted_at: 2026-05-04T00:00:33Z
author_login: jamesbraza | author_id: 8990777 | author_node_id: MDQ6VXNlcjg5OTA3Nzc= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4313864648 | github_node_id: I_kwDOCUB6oc8AAAABASBZyA | number: 45593
html_url: https://github.com/huggingface/transformers/issues/45593
api_url: https://api.github.com/repos/huggingface/transformers/issues/45593
title: D-FINE not using any auxiliary losses when denoising is turned off
body: ### System Info - `transformers` version: 5.7.0.dev0 - Platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.3 - Huggingface_hub version: 1.7.1 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorc...
state: closed | state_reason: completed | locked: false | comments_count: 2
labels: [ "bug" ] | assignees: []
created_at: 2026-04-23T06:01:05Z | updated_at: 2026-04-23T20:07:05Z | closed_at: 2026-04-23T20:07:05Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260424T000039Z | extracted_at: 2026-04-24T00:00:39Z
author_login: m-matthias | author_id: 16415097 | author_node_id: MDQ6VXNlcjE2NDE1MDk3 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 2252063300 | github_node_id: I_kwDOCUB6oc6GO8JE | number: 30333
html_url: https://github.com/huggingface/transformers/issues/30333
api_url: https://api.github.com/repos/huggingface/transformers/issues/30333
title: If a training job job failed MLFlow will not be reported and MLFlow shows job still running
body: ### System Info - `transformers` version: 4.40.0 - Platform: Linux-6.8.6-arch1-1-x86_64-with-glibc2.39 - Python version: 3.11.7 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: 0.27.0 - PyTorch version (GP?): 2.2.2+rocm5.7 (True) - Jax version: not installed - JaxLib versi...
state: open | state_reason: null | locked: false | comments_count: 11
labels: [ "trainer", "Good Second Issue", "Integrations" ] | assignees: []
created_at: 2024-04-19T04:31:41Z | updated_at: 2026-05-05T22:06:43Z | closed_at: null
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260506T000041Z | extracted_at: 2026-05-06T00:00:41Z
author_login: helloworld1 | author_id: 247316 | author_node_id: MDQ6VXNlcjI0NzMxNg== | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4315067067 | github_node_id: I_kwDOCUB6oc8AAAABATKyuw | number: 45600
html_url: https://github.com/huggingface/transformers/issues/45600
api_url: https://api.github.com/repos/huggingface/transformers/issues/45600
title: auto_mappings.py references removed Sam3LiteText configs, breaking CI
body: ### System Info - `transformers` version: 5.6.0.dev0 - Platform: Linux-5.14.0-570.12.1.el9_6.x86_64-x86_64-with-glibc2.34 - Python version: 3.11.15 - Huggingface_hub version: 1.10.2 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch ve...
state: closed | state_reason: completed | locked: false | comments_count: 4
labels: [ "bug" ] | assignees: []
created_at: 2026-04-23T09:51:06Z | updated_at: 2026-04-23T11:19:27Z | closed_at: 2026-04-23T11:19:27Z
author_association: NONE | milestone_title: null
snapshot_id: 20260423T120024Z | extracted_at: 2026-04-23T12:00:24Z
author_login: artem-spector | author_id: 10895869 | author_node_id: MDQ6VXNlcjEwODk1ODY5 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4323449470 | github_node_id: I_kwDOCUB6oc8AAAABAbKafg | number: 45632
html_url: https://github.com/huggingface/transformers/issues/45632
api_url: https://api.github.com/repos/huggingface/transformers/issues/45632
title: `trust_remote_code` cache path collides for local models sharing a leaf directory name
body: ### System Info - transformers: 5.5.3 - huggingface_hub: 1.12.0 - Python: 3.13 - OS: Linux ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own ...
state: closed | state_reason: completed | locked: false | comments_count: 1
labels: [ "bug" ] | assignees: []
created_at: 2026-04-24T14:04:02Z | updated_at: 2026-04-29T11:25:49Z | closed_at: 2026-04-29T11:25:49Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260429T120025Z | extracted_at: 2026-04-29T12:00:25Z
author_login: nurpax | author_id: 297823 | author_node_id: MDQ6VXNlcjI5NzgyMw== | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 3448019756 | github_node_id: I_kwDOCUB6oc7NhJss | number: 41115
html_url: https://github.com/huggingface/transformers/issues/41115
api_url: https://api.github.com/repos/huggingface/transformers/issues/41115
title: Add Model Architecture for MiniCPM3
body: ### Model description This is a feature request to add a new model architecture for MiniCPM3, a powerful, lightweight language model. MiniCPM3 is the third generation of the MiniCPM series. It demonstrates performance comparable to or exceeding many 7B-9B models, despite its smaller size. It excels in both English an...
state: open | state_reason: null | locked: false | comments_count: 1
labels: [ "New model" ] | assignees: []
created_at: 2025-09-24T07:07:47Z | updated_at: 2026-04-24T14:31:38Z | closed_at: null
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260424T180025Z | extracted_at: 2026-04-24T18:00:25Z
author_login: bzantium | author_id: 19511788 | author_node_id: MDQ6VXNlcjE5NTExNzg4 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4323748552 | github_node_id: I_kwDOCUB6oc8AAAABAbcqyA | number: 45636
html_url: https://github.com/huggingface/transformers/issues/45636
api_url: https://api.github.com/repos/huggingface/transformers/issues/45636
title: Proposal: add sdpa_memeff attn_implementation for shape combinations no fast backend covers
body: ## Summary Proposal to add a new `attn_implementation="sdpa_memeff"` that pins torch's SDPA dispatcher to `SDPBackend.EFFICIENT_ATTENTION` (via `sdpa_kernel([EFFICIENT_ATTENTION])` wrapping the existing `sdpa_attention_forward`). Filing as an issue to validate design direction before opening a PR. ## Motivation — two...
state: open | state_reason: reopened | locked: false | comments_count: 6
labels: [] | assignees: []
created_at: 2026-04-24T14:56:28Z | updated_at: 2026-05-04T14:58:16Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260504T180032Z | extracted_at: 2026-05-04T18:00:32Z
author_login: dvdimitrov13 | author_id: 60075474 | author_node_id: MDQ6VXNlcjYwMDc1NDc0 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4326688288 | github_node_id: I_kwDOCUB6oc8AAAABAeQGIA | number: 45644
html_url: https://github.com/huggingface/transformers/issues/45644
api_url: https://api.github.com/repos/huggingface/transformers/issues/45644
title: mps: test_eager_matches_sdpa_inference tests fail with PyTorch MPS backend
body: ### System Info - `transformers`: 5.7.0.dev0 (main, c472755e79) - macOS-26.1-arm64, Apple M5, torch 2.11.0 (MPS) - Python 3.12.13 ### Who can help? @Cyrilvallez (MPS counterpart to the XPU branch you added). ### Reproduction ``` TRANSFORMERS_TEST_DEVICE=mps python -m pytest tests/models/{llama,gemma,qwen2,mistral}/t...
state: closed | state_reason: completed | locked: false | comments_count: 1
labels: [] | assignees: []
created_at: 2026-04-25T02:09:42Z | updated_at: 2026-04-27T07:47:25Z | closed_at: 2026-04-27T07:47:24Z
author_association: NONE | milestone_title: null
snapshot_id: 20260427T120026Z | extracted_at: 2026-04-27T12:00:26Z
author_login: qflen | author_id: 194738340 | author_node_id: U_kgDOC5t4pA | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4327838929 | github_node_id: I_kwDOCUB6oc8AAAABAfWU0Q | number: 45646
html_url: https://github.com/huggingface/transformers/issues/45646
api_url: https://api.github.com/repos/huggingface/transformers/issues/45646
title: NLP
body: null
state: closed | state_reason: completed | locked: false | comments_count: 1
labels: [] | assignees: []
created_at: 2026-04-25T10:34:05Z | updated_at: 2026-04-27T09:05:10Z | closed_at: 2026-04-27T09:05:10Z
author_association: NONE | milestone_title: null
snapshot_id: 20260427T120026Z | extracted_at: 2026-04-27T12:00:26Z
author_login: mariam12-dotcom | author_id: 239433683 | author_node_id: U_kgDODkV30w | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4328968742 | github_node_id: I_kwDOCUB6oc8AAAABAgbSJg | number: 45647
html_url: https://github.com/huggingface/transformers/issues/45647
api_url: https://api.github.com/repos/huggingface/transformers/issues/45647
title: MusicgenMelody ignores audio conditioning (regression between 4.48 and 4.57)
body: _(Original Note): Claude removed this line when editing, but I wanted to fully disclose that this issue was discovered and written up by Claude code_ **Update (corrected):** the regression is wider than the title suggests — it already exists in transformers **4.57.6**, the latest 4.x. So this is not a v5 regression; i...
state: open | state_reason: null | locked: false | comments_count: 12
labels: [] | assignees: []
created_at: 2026-04-25T18:58:41Z | updated_at: 2026-05-01T17:10:32Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260501T180051Z | extracted_at: 2026-05-01T18:00:51Z
author_login: audiodude | author_id: 57832 | author_node_id: MDQ6VXNlcjU3ODMy | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4334969822 | github_node_id: I_kwDOCUB6oc8AAAABAmJj3g | number: 45656
html_url: https://github.com/huggingface/transformers/issues/45656
api_url: https://api.github.com/repos/huggingface/transformers/issues/45656
title: Optimizer step being called 2 times when using deepspeed
body: ### System Info In version transformers==4.57.3, and deepspeed==0.18.3, in below screenshot, when accelerator.backward is called, the deepspeed backward internally calls engine.step which is performing optimizer step at gradient accumulation step The below snapshot is from trainer.py in transformers library <img w...
state: open | state_reason: null | locked: false | comments_count: 0
labels: [ "bug" ] | assignees: []
created_at: 2026-04-27T10:22:39Z | updated_at: 2026-04-27T10:23:25Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260427T120026Z | extracted_at: 2026-04-27T12:00:26Z
author_login: harsh2912 | author_id: 18512791 | author_node_id: MDQ6VXNlcjE4NTEyNzkx | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4338826538 | github_node_id: I_kwDOCUB6oc8AAAABAp09Kg | number: 45663
html_url: https://github.com/huggingface/transformers/issues/45663
api_url: https://api.github.com/repos/huggingface/transformers/issues/45663
title: Gemma-4 training with FSDP2 raises `KeyError` in `Gemma4TextAttention.forward` because `shared_kv_states` is rebuilt per-layer
body: ### System Info - `transformers` version: 5.6.2 - Platform: Linux-6.8.0-1043-nvidia-x86_64-with-glibc2.35 - Python version: 3.12.13 - Huggingface_hub version: 1.11.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerat...
state: closed | state_reason: completed | locked: false | comments_count: 3
labels: [ "bug" ] | assignees: []
created_at: 2026-04-27T20:55:50Z | updated_at: 2026-05-14T05:58:27Z | closed_at: 2026-05-14T05:58:27Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260514T060044Z | extracted_at: 2026-05-14T06:00:44Z
author_login: jamesbraza | author_id: 8990777 | author_node_id: MDQ6VXNlcjg5OTA3Nzc= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4341660362 | github_node_id: I_kwDOCUB6oc8AAAABAsh6yg | number: 45674
html_url: https://github.com/huggingface/transformers/issues/45674
api_url: https://api.github.com/repos/huggingface/transformers/issues/45674
title: [BitsAndBytesConfig] Providing llm_int8_skip_modules clears the default lm_head exclusion, causing AssertionError in 4-bit inference
body: ## Environment - `transformers`: 5.5.4 - `bitsandbytes`: 0.49.2 - `torch`: 2.11.0+cu126 - CUDA: 12.6 - OS: Windows 11 - GPU: NVIDIA RTX 3090 ## Bug Description When specifying `llm_int8_skip_modules` in `BitsAndBytesConfig`, the default module exclusion list (which normally protects `lm_head` from being quantized) i...
state: open | state_reason: null | locked: false | comments_count: 5
labels: [] | assignees: []
created_at: 2026-04-28T08:27:12Z | updated_at: 2026-04-28T13:03:14Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260428T180033Z | extracted_at: 2026-04-28T18:00:33Z
author_login: softguy777 | author_id: 145181514 | author_node_id: U_kgDOCKdLSg | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4341641436 | github_node_id: I_kwDOCUB6oc8AAAABAsgw3A | number: 45672
html_url: https://github.com/huggingface/transformers/issues/45672
api_url: https://api.github.com/repos/huggingface/transformers/issues/45672
title: [Gemma4] torch.finfo() TypeError on uint8 weights in audio modules during 4-bit (NF4) inference
body: ## Environment - `transformers`: 5.5.4 - `bitsandbytes`: 0.49.2 - `torch`: 2.11.0+cu126 - CUDA: 12.6 - OS: Windows 11 - GPU: NVIDIA RTX 3090 ## Bug Description When running `google/gemma-4-e2b-it` with `BitsAndBytesConfig(load_in_4bit=True)`, the forward pass crashes immediately with: ``` TypeError: torch.finfo() r...
state: open | state_reason: null | locked: false | comments_count: 1
labels: [] | assignees: []
created_at: 2026-04-28T08:23:48Z | updated_at: 2026-04-28T11:32:26Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260428T120019Z | extracted_at: 2026-04-28T12:00:19Z
author_login: softguy777 | author_id: 145181514 | author_node_id: U_kgDOCKdLSg | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4336103397 | github_node_id: I_kwDOCUB6oc8AAAABAnOv5Q | number: 45657
html_url: https://github.com/huggingface/transformers/issues/45657
api_url: https://api.github.com/repos/huggingface/transformers/issues/45657
title: ValueError in zero_shot_object_detection.md doctest on Python 3.13
body: While running the test suite on Python 3.13.7, pytest fails to collect tests due to a malformed doctest in docs/source/en/tasks/zero_shot_object_detection.md. Environment: Python: 3.13.7 Platform: macOS (darwin) transformers: main branch Error: ValueError: line 172 of the docstring for zero_shot_object_detection.md ...
state: closed | state_reason: completed | locked: false | comments_count: 0
labels: [] | assignees: []
created_at: 2026-04-27T13:28:53Z | updated_at: 2026-04-28T11:40:48Z | closed_at: 2026-04-28T11:40:48Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260428T120019Z | extracted_at: 2026-04-28T12:00:19Z
author_login: AnkitAhlawat7742 | author_id: 199906670 | author_node_id: U_kgDOC-pVbg | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4341737188 | github_node_id: I_kwDOCUB6oc8AAAABAsmm5A | number: 45676
html_url: https://github.com/huggingface/transformers/issues/45676
api_url: https://api.github.com/repos/huggingface/transformers/issues/45676
title: Gemma 4: Exploding pre-clip gradient norms during LoRA fine-tuning of `gemma-4-31B-it`
body: ### System Info ## Summary Fine-tuning `google/gemma-4-31B-it` with a small LoRA via standard `transformers.Trainer` + `peft` on a public chat-style dataset produces pre-clip gradient norms that are **1–3 orders of magnitude larger than expected**. With `max_grad_norm=1.0` the actual updates are bounded, but the pre-...
state: closed | state_reason: completed | locked: false | comments_count: 9
labels: [ "Good Second Issue", "bug" ] | assignees: []
created_at: 2026-04-28T08:39:34Z | updated_at: 2026-05-02T15:07:30Z | closed_at: 2026-05-02T15:07:30Z
author_association: NONE | milestone_title: null
snapshot_id: 20260502T180031Z | extracted_at: 2026-05-02T18:00:31Z
author_login: pritmish | author_id: 183807581 | author_node_id: U_kgDOCvSuXQ | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4343386640 | github_node_id: I_kwDOCUB6oc8AAAABAuLSEA | number: 45685
html_url: https://github.com/huggingface/transformers/issues/45685
api_url: https://api.github.com/repos/huggingface/transformers/issues/45685
title: [moe] mps interface has error "histogram_mps" not implemented for 'Int'
body: ### System Info # Transformers env info Python version: 3.13.9 os system: macOS-26.4.1-arm64-arm-64bit-Mach-O PyTorch version: 2.11.0 Transformers version: 5.6.2 CUDA : False MPS: True ### Who can help? @cyrilvallez in [moe.py](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/...
state: closed | state_reason: completed | locked: false | comments_count: 2
labels: [ "bug" ] | assignees: []
created_at: 2026-04-28T13:08:21Z | updated_at: 2026-05-05T08:28:03Z | closed_at: 2026-05-05T08:28:03Z
author_association: NONE | milestone_title: null
snapshot_id: 20260505T120036Z | extracted_at: 2026-05-05T12:00:36Z
author_login: chenzhe1204 | author_id: 38660669 | author_node_id: MDQ6VXNlcjM4NjYwNjY5 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4343349447 | github_node_id: I_kwDOCUB6oc8AAAABAuJAxw | number: 45684
html_url: https://github.com/huggingface/transformers/issues/45684
api_url: https://api.github.com/repos/huggingface/transformers/issues/45684
title: save_pretrained` (with `register_for_auto_class`) propagates read-only permissions from custom-model source files
body: ### System Info - `transformers` version: 5.5.3 - Python: 3.13 - Platform: Linux ### Who can help? @Cyrilvallez (model loading) ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My...
state: closed | state_reason: completed | locked: false | comments_count: 1
labels: [ "bug" ] | assignees: []
created_at: 2026-04-28T13:02:46Z | updated_at: 2026-04-29T11:03:04Z | closed_at: 2026-04-29T11:03:04Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260429T120025Z | extracted_at: 2026-04-29T12:00:25Z
author_login: nurpax | author_id: 297823 | author_node_id: MDQ6VXNlcjI5NzgyMw== | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4348001591 | github_node_id: I_kwDOCUB6oc8AAAABAyk9Nw | number: 45693
html_url: https://github.com/huggingface/transformers/issues/45693
api_url: https://api.github.com/repos/huggingface/transformers/issues/45693
title: Why the calculation of train_batch_size unrelated to split_batches
body: In the calculation of `train_batch_size` property in transformers/training_args.py, the formula used is `train_batch_size = self.per_device_train_batch_size * max(1, self.n_gpu)`. When `split_batches` is set to `False`, this is easy to understand: the number of samples on each GPU multiplied by the number of GPUs equa...
state: open | state_reason: null | locked: false | comments_count: 2
labels: [] | assignees: []
created_at: 2026-04-29T04:57:23Z | updated_at: 2026-04-29T08:19:16Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260429T120025Z | extracted_at: 2026-04-29T12:00:25Z
author_login: mklpr | author_id: 7464549 | author_node_id: MDQ6VXNlcjc0NjQ1NDk= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4348919946 | github_node_id: I_kwDOCUB6oc8AAAABAzdAig | number: 45696
html_url: https://github.com/huggingface/transformers/issues/45696
api_url: https://api.github.com/repos/huggingface/transformers/issues/45696
title: Improving CLI Serving Code Structure with Class-Based FastAPI Patterns
body: ### Feature request This proposal suggests refactoring the CLI serving code (`transformers/cli/serving/server.py`) from a function-based to a class-based architecture, using a pattern that better organizes related endpoints, clarifies resource lifecycle management, and improves long-term maintainability. ### Motivat...
state: closed | state_reason: completed | locked: false | comments_count: 1
labels: [ "Feature request", "Code agent slop" ] | assignees: []
created_at: 2026-04-29T08:16:12Z | updated_at: 2026-04-29T10:12:24Z | closed_at: 2026-04-29T10:11:53Z
author_association: NONE | milestone_title: null
snapshot_id: 20260429T120025Z | extracted_at: 2026-04-29T12:00:25Z
author_login: HeHongyeFY | author_id: 44426557 | author_node_id: MDQ6VXNlcjQ0NDI2NTU3 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4348995412 | github_node_id: I_kwDOCUB6oc8AAAABAzhnVA | number: 45698
html_url: https://github.com/huggingface/transformers/issues/45698
api_url: https://api.github.com/repos/huggingface/transformers/issues/45698
title: from_pretrained loads wrong custom module after save_pretrained
body: ### System Info Transformers version: 5.7.0.dev0 Python version: 3.13.3 Platform: Linux-6.17.0-20-generic-x86_64-with-glibc2.39 Machine: x86_64 Note, running `transformers env` fails with: `NameError: name 'CompletionCreateParamsStreaming' is not defined` ### Who can help? @CyrilVallez (model loading) ### Informat...
state: open | state_reason: null | locked: false | comments_count: 11
labels: [ "bug" ] | assignees: []
created_at: 2026-04-29T08:29:41Z | updated_at: 2026-05-06T19:43:50Z | closed_at: null
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260507T000029Z | extracted_at: 2026-05-07T00:00:29Z
author_login: nurpax | author_id: 297823 | author_node_id: MDQ6VXNlcjI5NzgyMw== | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4350467219 | github_node_id: I_kwDOCUB6oc8AAAABA07ckw | number: 45701
html_url: https://github.com/huggingface/transformers/issues/45701
api_url: https://api.github.com/repos/huggingface/transformers/issues/45701
title: transformers version changes the tokenization
body: ### System Info ``` Python: 3.12.3 transformers: 5.7.0 torch : 2.11.0+cu130 sentencepiece: 0.2.1 ``` ________________ - **Platform:** Linux-6.17.0-22-generic-x86_64-with-glibc2.39 - **Python version:** 3.12.3 - **Huggingface_hub version:** 0.36.2 - **Safetensors version:** 0.7.0 - **Accelerate version:** 1.12....
state: open | state_reason: null | locked: false | comments_count: 3
labels: [ "bug" ] | assignees: []
created_at: 2026-04-29T12:19:16Z | updated_at: 2026-05-13T03:23:55Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260513T060025Z | extracted_at: 2026-05-13T06:00:25Z
author_login: PiRom1 | author_id: 119457355 | author_node_id: U_kgDOBx7GSw | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4352722515 | github_node_id: I_kwDOCUB6oc8AAAABA3FGUw | number: 45704
html_url: https://github.com/huggingface/transformers/issues/45704
api_url: https://api.github.com/repos/huggingface/transformers/issues/45704
title: T5 silently uses apex.FusedRMSNorm which has a memory leak (NVIDIA/apex#1999)
body: ### System Info - `transformers` version: 4.57.6 - Platform: Linux-6.8.0-100-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - Huggingface_hub version: 0.36.2 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: 0.18.9 - PyTorch version (accelerator?):...
state: closed | state_reason: completed | locked: false | comments_count: 0
labels: [ "bug" ] | assignees: []
created_at: 2026-04-29T18:16:58Z | updated_at: 2026-05-01T12:05:42Z | closed_at: 2026-05-01T12:05:42Z
author_association: NONE | milestone_title: null
snapshot_id: 20260501T180051Z | extracted_at: 2026-05-01T18:00:51Z
author_login: dustnehowl | author_id: 39877181 | author_node_id: MDQ6VXNlcjM5ODc3MTgx | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4354557717 | github_node_id: I_kwDOCUB6oc8AAAABA41HFQ | number: 45706
html_url: https://github.com/huggingface/transformers/issues/45706
api_url: https://api.github.com/repos/huggingface/transformers/issues/45706
title: Hive Civilization — x402-native services for transformers Agents (notification)
body: Notification post — Hive Civilization runs 52 x402-wired services on Base mainnet (treasury 0x15184bf50b3d3f52b60434f8942b7d52f2eb436e, USDC settlement, x402 spec from coinbase/x402). Why it might be relevant to transformers/Agents: - transformers Agents tools can wrap Hive services (evaluator, classifier, summarizer...
state: closed | state_reason: completed | locked: false | comments_count: 0
labels: [] | assignees: []
created_at: 2026-04-30T00:41:28Z | updated_at: 2026-04-30T10:18:32Z | closed_at: 2026-04-30T10:18:32Z
author_association: NONE | milestone_title: null
snapshot_id: 20260430T120024Z | extracted_at: 2026-04-30T12:00:24Z
author_login: srotzin | author_id: 140019476 | author_node_id: U_kgDOCFiHFA | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4355734326 | github_node_id: I_kwDOCUB6oc8AAAABA587Ng | number: 45710
html_url: https://github.com/huggingface/transformers/issues/45710
api_url: https://api.github.com/repos/huggingface/transformers/issues/45710
title: Fix wrong repo link in Dinov2ForImageClassification doc example
body: The link to the config.json for google/dinov2-base-patch16-224 in the Dinov2ForImageClassification example is returning a 404. It should be updated to the correct path.
state: closed | state_reason: completed | locked: false | comments_count: 1
labels: [] | assignees: []
created_at: 2026-04-30T06:10:00Z | updated_at: 2026-05-01T04:10:22Z | closed_at: 2026-05-01T04:10:22Z
author_association: NONE | milestone_title: null
snapshot_id: 20260501T113108Z | extracted_at: 2026-05-01T11:31:08Z
author_login: Milan-Bhimani | author_id: 157954157 | author_node_id: U_kgDOCWowbQ | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4356818596 | github_node_id: I_kwDOCUB6oc8AAAABA6_GpA | number: 45715
html_url: https://github.com/huggingface/transformers/issues/45715
api_url: https://api.github.com/repos/huggingface/transformers/issues/45715
title: PreTrainedTokenizer.convert_ids_to_tokens(skip_special_tokens=True) rebuilds all_special_ids on every iteration of the per-id loop
body: ### System Info - transformers version: 5.3.0 (also reproduces on `main`) - Python: 3.12 - OS: Linux - Affected class: `transformers.tokenization_python.PreTrainedTokenizer` (renamed to `PythonBackend` on `main`) — the slow-tokenizer base class - **Not** affected: `TokenizersBackend` (the fast subclass) — it already h...
state: closed | state_reason: completed | locked: false | comments_count: 2
labels: [] | assignees: []
created_at: 2026-04-30T09:08:07Z | updated_at: 2026-05-04T00:50:43Z | closed_at: 2026-05-04T00:50:43Z
author_association: NONE | milestone_title: null
snapshot_id: 20260504T060033Z | extracted_at: 2026-05-04T06:00:33Z
author_login: longlee0622 | author_id: 6110159 | author_node_id: MDQ6VXNlcjYxMTAxNTk= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4356306059 | github_node_id: I_kwDOCUB6oc8AAAABA6f0iw | number: 45712
html_url: https://github.com/huggingface/transformers/issues/45712
api_url: https://api.github.com/repos/huggingface/transformers/issues/45712
title: Six leftover dummy classes in `dummy_pt_objects.py` fail `check_repo.py` and leak into `dir(transformers)` without torch
body: ### System Info transformers 5.7.0.dev0 (main). `utils/check_repo.py` raises on a fresh install without torch: ``` Exception: The following objects are in the public init, but not in the docs: - BeamScorer - ConstrainedBeamSearchScorer - Constraint - ConstraintListState - DisjunctiveConstraint - PhrasalConstra...
state: closed | state_reason: completed | locked: false | comments_count: 1
labels: [ "bug" ] | assignees: []
created_at: 2026-04-30T07:47:24Z | updated_at: 2026-04-30T14:01:31Z | closed_at: 2026-04-30T14:01:31Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260501T113108Z | extracted_at: 2026-05-01T11:31:08Z
author_login: jw9603 | author_id: 70795645 | author_node_id: MDQ6VXNlcjcwNzk1NjQ1 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4359518721 | github_node_id: I_kwDOCUB6oc8AAAABA9j6AQ | number: 45727
html_url: https://github.com/huggingface/transformers/issues/45727
api_url: https://api.github.com/repos/huggingface/transformers/issues/45727
title: fix(generation): correct spelling mistake in continuous_api docstring
body: Hi team, While reviewing the continuous batching generation logic for a local deployment, I noticed a minor spelling mistake (`usefull` -> `useful`) in the docstring of `continuous_api.py`. It's a tiny detail, but keeping the core API docs pristine is always good. Here is the patch: ```diff --- a/src/transformers/g...
state: closed | state_reason: completed | locked: false | comments_count: 3
labels: [] | assignees: []
created_at: 2026-04-30T16:03:07Z | updated_at: 2026-05-08T11:14:13Z | closed_at: 2026-05-08T11:14:13Z
author_association: NONE | milestone_title: null
snapshot_id: 20260508T120022Z | extracted_at: 2026-05-08T12:00:22Z
author_login: lohdptm | author_id: 280717817 | author_node_id: U_kgDOELtp-Q | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4358160543 | github_node_id: I_kwDOCUB6oc8AAAABA8RAnw | number: 45721
html_url: https://github.com/huggingface/transformers/issues/45721
api_url: https://api.github.com/repos/huggingface/transformers/issues/45721
title: Pass library_name="transformers" to HfApi/Hub functions in push paths so commits attribute correctly
body: Per the repo's agentic contribution policy, opening this issue first to coordinate before sending the PR. ### Summary `transformers` downloads correctly report a User-Agent of `transformers/<version>; python/...; session_id/...` (built by `http_user_agent()` in `src/transformers/utils/hub.py`). Pushes do not — every ...
state: open | state_reason: null | locked: false | comments_count: 2
labels: [] | assignees: []
created_at: 2026-04-30T12:40:51Z | updated_at: 2026-04-30T19:24:43Z | closed_at: null
author_association: MEMBER | milestone_title: null
snapshot_id: 20260501T113108Z | extracted_at: 2026-05-01T11:31:08Z
author_login: davanstrien | author_id: 8995957 | author_node_id: MDQ6VXNlcjg5OTU5NTc= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 2566242682 | github_node_id: I_kwDOCUB6oc6Y9cF6 | number: 33945
html_url: https://github.com/huggingface/transformers/issues/33945
api_url: https://api.github.com/repos/huggingface/transformers/issues/33945
title: Automatic dynamic batch size selection for DataCollatorWithFlattening
body: ### Feature request Add a custom (batch index) sampler to automatically determine batch size to a fixed target number of tokens. ### Motivation I'm keen to try out DataCollatorWithFlattening but unsure about how to set batch size, since no padding will be added so the total number of tokens is dynamic. Im a...
state: closed | state_reason: completed | locked: false | comments_count: 9
labels: [ "Usage", "Feature request" ] | assignees: []
created_at: 2024-10-04T12:19:46Z | updated_at: 2026-05-16T02:04:30Z | closed_at: 2026-05-01T12:27:32Z
author_association: NONE | milestone_title: null
snapshot_id: 20260516T060035Z | extracted_at: 2026-05-16T06:00:35Z
author_login: alex-hh | author_id: 5719745 | author_node_id: MDQ6VXNlcjU3MTk3NDU= | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4184237048 | github_node_id: I_kwDOCUB6oc75ZmP4 | number: 45160
html_url: https://github.com/huggingface/transformers/issues/45160
api_url: https://api.github.com/repos/huggingface/transformers/issues/45160
title: Add AEO quality badge to README
body: Hi! I'm behind [Clarvia](https://clarvia.art) — an open quality scoring platform for AI tools and MCP servers. **Hugging Face Transformers** is indexed on Clarvia. Embed a live AEO (Agent Experience Optimization) badge in your README: ```markdown [![AEO Score](https://clarvia.art/api/badge/transformers)](https://clar...
state: closed | state_reason: completed | locked: false | comments_count: 1
labels: [] | assignees: []
created_at: 2026-04-01T05:29:50Z | updated_at: 2026-05-10T08:34:07Z | closed_at: 2026-05-10T08:34:07Z
author_association: NONE | milestone_title: null
snapshot_id: 20260510T120026Z | extracted_at: 2026-05-10T12:00:26Z
author_login: digitamaz | author_id: 198436502 | author_node_id: U_kgDOC9Pmlg | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4363998626 | github_node_id: I_kwDOCUB6oc8AAAABBB1Vog | number: 45735
html_url: https://github.com/huggingface/transformers/issues/45735
api_url: https://api.github.com/repos/huggingface/transformers/issues/45735
title: Bug with detecting cache positions in sdpa_mask
body: ### System Info Transformers v5.7.0 ### Who can help? @ArthurZucker @Cyrilvallez ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### ...
state: closed | state_reason: completed | locked: false | comments_count: 1
labels: [ "bug" ] | assignees: []
created_at: 2026-05-01T10:42:56Z | updated_at: 2026-05-11T06:23:13Z | closed_at: 2026-05-11T06:23:13Z
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260511T120023Z | extracted_at: 2026-05-11T12:00:23Z
author_login: davidmezzetti | author_id: 561939 | author_node_id: MDQ6VXNlcjU2MTkzOQ== | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4364318314 | github_node_id: I_kwDOCUB6oc8AAAABBCI2ag | number: 45736
html_url: https://github.com/huggingface/transformers/issues/45736
api_url: https://api.github.com/repos/huggingface/transformers/issues/45736
title: Please update `tokenizers` version check
body: ### Feature request Several days before, `tokenizers` released a new version of `0.23.1`, but `transformers` doesn't allow us to use it. https://github.com/huggingface/transformers/blob/ecc3d0da0f8e9c2c54676345b816db29f842792a59c75c62c/setup.py#L150 https://github.com/huggingface/transformers/blob/ecc3d0da0f8e9c2c54676345b816...
state: open | state_reason: null | locked: false | comments_count: 2
labels: [ "Feature request" ] | assignees: []
created_at: 2026-05-01T12:15:45Z | updated_at: 2026-05-02T06:40:40Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260502T120033Z | extracted_at: 2026-05-02T12:00:33Z
author_login: lalala-233 | author_id: 79189174 | author_node_id: MDQ6VXNlcjc5MTg5MTc0 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4371167819 | github_node_id: I_kwDOCUB6oc8AAAABBIq6Sw | number: 45750
html_url: https://github.com/huggingface/transformers/issues/45750
api_url: https://api.github.com/repos/huggingface/transformers/issues/45750
title: `Qwen3VLVisionPatchEmbed.proj` (`nn.Conv3d` with `stride == kernel`) is ~50,000× slower than equivalent `nn.Linear` on Blackwell + bf16
body: ### System Info ``` transformers version: 5.0.0.dev0 PyTorch: 2.9.0+cu128 CUDA: 12.8 cuDNN: 9.10.0.2 (91002) Python: 3.14.0 flash-attn: 2.8.3 (installed) GPU: NVIDIA GeForce RTX 5090 (Blackwell, compute capability 12.0, sm_120) OS: ...
state: open | state_reason: null | locked: false | comments_count: 4
labels: [ "bug" ] | assignees: []
created_at: 2026-05-03T07:27:10Z | updated_at: 2026-05-13T16:47:11Z | closed_at: null
author_association: NONE | milestone_title: null
snapshot_id: 20260513T180036Z | extracted_at: 2026-05-13T18:00:36Z
author_login: WangYuHang-cmd | author_id: 74978107 | author_node_id: MDQ6VXNlcjc0OTc4MTA3 | author_type: User | author_site_admin: false

repo: huggingface/transformers | github_id: 4371965219 | github_node_id: I_kwDOCUB6oc8AAAABBJblIw | number: 45753
html_url: https://github.com/huggingface/transformers/issues/45753
api_url: https://api.github.com/repos/huggingface/transformers/issues/45753
title: Qwen3_5 goes into infinite loop for a specific image
body: ### System Info Colab T4 GPU ### Who can help? I tried to run the colab notebooks for qwen3_5 models which are auto-generated. The inference takes forever. I figured out this issue is due to the specific image that is automatically inserted => "https://huggingface.co/datasets/huggingface/documentation-images/resolve...
state: open | state_reason: null | locked: false | comments_count: 7
labels: [ "bug" ] | assignees: []
created_at: 2026-05-03T13:47:50Z | updated_at: 2026-05-13T15:22:02Z | closed_at: null
author_association: CONTRIBUTOR | milestone_title: null
snapshot_id: 20260513T180036Z | extracted_at: 2026-05-13T18:00:36Z
author_login: MHRDYN7 | author_id: 113298714 | author_node_id: U_kgDOBsDNGg | author_type: User | author_site_admin: false

huggingface/transformers
4,372,666,795
I_kwDOCUB6oc8AAAABBKGZqw
45,758
https://github.com/huggingface/transformers/issues/45758
https://api.github.com/repos/huggingface/transformers/issues/45758
DeepSeek-V4 CSA eager path may not preserve per-query top-k masking for S > 1
### System Info Observed on `main` after DeepSeek-V4 support was added in #45643. Relevant file: - `src/transformers/models/deepseek_v4/modeling_deepseek_v4.py` - `DeepseekV4CSACompressor.forward` - `DeepseekV4Attention.forward` ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts -...
closed
completed
false
3
[]
[]
2026-05-03T18:52:35Z
2026-05-12T09:27:50Z
2026-05-12T09:27:49Z
NONE
null
20260512T120027Z
2026-05-12T12:00:27Z
kekmodel
7,728,527
MDQ6VXNlcjc3Mjg1Mjc=
User
false
huggingface/transformers
4,372,671,381
I_kwDOCUB6oc8AAAABBKGrlQ
45,759
https://github.com/huggingface/transformers/issues/45759
https://api.github.com/repos/huggingface/transformers/issues/45759
`AutoModelForCausalLM.from_config` does not unwrap `text_config` for composite Qwen 3.5 and 3.6 multimodal configs
### System Info - `transformers` version: 5.6.2 - Platform: Linux-6.8.0-1043-nvidia-x86_64-with-glibc2.35 - Python version: 3.12.13 - Huggingface_hub version: 1.13.0 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerato...
closed
completed
false
0
[ "bug" ]
[]
2026-05-03T18:54:41Z
2026-05-05T11:23:45Z
2026-05-05T11:23:45Z
CONTRIBUTOR
null
20260505T120036Z
2026-05-05T12:00:36Z
jamesbraza
8,990,777
MDQ6VXNlcjg5OTA3Nzc=
User
false
huggingface/transformers
4,372,754,066
I_kwDOCUB6oc8AAAABBKLukg
45,761
https://github.com/huggingface/transformers/issues/45761
https://api.github.com/repos/huggingface/transformers/issues/45761
Veneto
null
closed
completed
false
0
[]
[]
2026-05-03T19:32:27Z
2026-05-04T08:46:50Z
2026-05-04T08:46:50Z
NONE
null
20260504T180032Z
2026-05-04T18:00:32Z
mirko772
207,275,767
U_kgDODFrG9w
User
false
huggingface/transformers
4,372,756,390
I_kwDOCUB6oc8AAAABBKL3pg
45,762
https://github.com/huggingface/transformers/issues/45762
https://api.github.com/repos/huggingface/transformers/issues/45762
Mirko Privitera 30-09-1990
null
closed
completed
false
0
[]
[]
2026-05-03T19:33:28Z
2026-05-04T08:46:55Z
2026-05-04T08:46:55Z
NONE
null
20260504T180032Z
2026-05-04T18:00:32Z
mirko772
207,275,767
U_kgDODFrG9w
User
false
huggingface/transformers
4,372,761,179
I_kwDOCUB6oc8AAAABBKMKWw
45,763
https://github.com/huggingface/transformers/issues/45763
https://api.github.com/repos/huggingface/transformers/issues/45763
Ivan Privitera 02-09-1993
null
closed
completed
false
0
[]
[]
2026-05-03T19:35:29Z
2026-05-04T08:47:24Z
2026-05-04T08:47:24Z
NONE
null
20260504T180032Z
2026-05-04T18:00:32Z
mirko772
207,275,767
U_kgDODFrG9w
User
false
huggingface/transformers
4,372,762,900
I_kwDOCUB6oc8AAAABBKMRFA
45,764
https://github.com/huggingface/transformers/issues/45764
https://api.github.com/repos/huggingface/transformers/issues/45764
Daniele Privitera 14-05-1998
null
closed
completed
false
0
[]
[]
2026-05-03T19:36:14Z
2026-05-04T08:47:19Z
2026-05-04T08:47:19Z
NONE
null
20260504T180032Z
2026-05-04T18:00:32Z
mirko772
207,275,767
U_kgDODFrG9w
User
false
huggingface/transformers
4,372,769,780
I_kwDOCUB6oc8AAAABBKMr9A
45,765
https://github.com/huggingface/transformers/issues/45765
https://api.github.com/repos/huggingface/transformers/issues/45765
Erika Privitera 14-08-2003
null
closed
completed
false
0
[]
[]
2026-05-03T19:39:07Z
2026-05-04T08:47:15Z
2026-05-04T08:47:15Z
NONE
null
20260504T180032Z
2026-05-04T18:00:32Z
mirko772
207,275,767
U_kgDODFrG9w
User
false
huggingface/transformers
4,372,771,685
I_kwDOCUB6oc8AAAABBKMzZQ
45,766
https://github.com/huggingface/transformers/issues/45766
https://api.github.com/repos/huggingface/transformers/issues/45766
Verduci Caterina 21-07-1964
null
closed
completed
false
0
[]
[]
2026-05-03T19:40:01Z
2026-05-04T08:47:10Z
2026-05-04T08:47:10Z
NONE
null
20260504T180032Z
2026-05-04T18:00:32Z
mirko772
207,275,767
U_kgDODFrG9w
User
false
huggingface/transformers
4,372,772,935
I_kwDOCUB6oc8AAAABBKM4Rw
45,767
https://github.com/huggingface/transformers/issues/45767
https://api.github.com/repos/huggingface/transformers/issues/45767
Privitera Fabrizio 16-12-1967
null
closed
completed
false
0
[]
[]
2026-05-03T19:40:34Z
2026-05-04T08:47:05Z
2026-05-04T08:47:05Z
NONE
null
20260504T180032Z
2026-05-04T18:00:32Z
mirko772
207,275,767
U_kgDODFrG9w
User
false
huggingface/transformers
4,372,796,632
I_kwDOCUB6oc8AAAABBKOU2A
45,768
https://github.com/huggingface/transformers/issues/45768
https://api.github.com/repos/huggingface/transformers/issues/45768
Macro Privitera Pitbull 24-10-2025
null
closed
completed
false
0
[]
[]
2026-05-03T19:51:36Z
2026-05-04T08:47:00Z
2026-05-04T08:47:00Z
NONE
null
20260504T180032Z
2026-05-04T18:00:32Z
mirko772
207,275,767
U_kgDODFrG9w
User
false
huggingface/transformers
4,381,391,476
I_kwDOCUB6oc8AAAABBSa6dA
45,779
https://github.com/huggingface/transformers/issues/45779
https://api.github.com/repos/huggingface/transformers/issues/45779
[Windows] RTX 5070 Ti (Blackwell sm_120) - setup and deployment notes
### Environment - GPU: NVIDIA GeForce RTX 5070 Ti Laptop GPU (Blackwell, compute capability 12.0) - Driver: 595.79 (CUDA 13.2) - OS: Windows 11 - Python: 3.14 - transformers: [latest] ### Problem transformers on RTX 5070 Ti requires workarounds: - **TORCH_CUDA_ARCH_LIST=12.0** required for Blackwell - Model loading m...
open
null
false
0
[]
[]
2026-05-05T03:54:09Z
2026-05-05T03:54:09Z
null
NONE
null
20260505T060044Z
2026-05-05T06:00:44Z
loongmiaow-pixel
252,396,232
U_kgDODwtCyA
User
false
huggingface/transformers
3,583,802,125
I_kwDOCUB6oc7VnHsN
41,999
https://github.com/huggingface/transformers/issues/41999
https://api.github.com/repos/huggingface/transformers/issues/41999
Add Timestamp Support for Voxtral Models
### Feature request Add support for segment-level timestamps for the Voxtral models (e.g., mistralai/Voxtral-Mini-3B-2507), similar to the existing timestamp functionality available in the Whisper models. ### Motivation According to the [official Mistral documentation on audio transcription](https://docs.mistral.ai/...
open
null
false
9
[ "Feature request" ]
[]
2025-11-03T21:23:01Z
2026-05-05T09:04:47Z
null
NONE
null
20260505T120036Z
2026-05-05T12:00:36Z
juzhxng
22,426,359
MDQ6VXNlcjIyNDI2MzU5
User
false
huggingface/transformers
2,481,366,710
I_kwDOCUB6oc6T5qa2
32,946
https://github.com/huggingface/transformers/issues/32946
https://api.github.com/repos/huggingface/transformers/issues/32946
Support `StaticCache` in assisted generation
Looking for contributions! Assisted generation (or speculative decoding) is a strategy to speed up generation. Using `StaticCache` and `torch.compile` is another strategy to speed up generation. Currently, the two are not compatible. It would be nice to be able to use both at the same time, for maximum speed 😎 ...
closed
completed
false
3
[ "Good Difficult Issue", "Generation", "Cache" ]
[]
2024-08-22T17:34:38Z
2026-05-05T13:02:50Z
2026-05-05T13:02:50Z
CONTRIBUTOR
null
20260505T180037Z
2026-05-05T18:00:37Z
gante
12,240,844
MDQ6VXNlcjEyMjQwODQ0
User
false
huggingface/transformers
4,387,308,095
I_kwDOCUB6oc8AAAABBYECPw
45,796
https://github.com/huggingface/transformers/issues/45796
https://api.github.com/repos/huggingface/transformers/issues/45796
grootN_training
VLA model training for academic purposes. Requires Flash Attention 2.
closed
completed
false
0
[]
[]
2026-05-05T21:46:46Z
2026-05-06T10:55:54Z
2026-05-06T10:55:54Z
NONE
null
20260506T120019Z
2026-05-06T12:00:19Z
zeic1
120,256,880
U_kgDOByr5cA
User
false
huggingface/transformers
4,387,748,264
I_kwDOCUB6oc8AAAABBYe5qA
45,797
https://github.com/huggingface/transformers/issues/45797
https://api.github.com/repos/huggingface/transformers/issues/45797
[BUG] HF Hub is DOWN on AWS in Xet mode
### System Info # Xet-backed model download hangs midway for `Qwen/Qwen3.6-35B-A3B` ## Summary Downloading `Qwen/Qwen3.6-35B-A3B` on a p5en.48xlarge in us-east-2 on AWS hangs partway through when using the default Hugging Face download path. Clearing `~/.cache` does not fix it. The evidence suggests the hang is in ...
closed
completed
false
4
[ "bug" ]
[]
2026-05-05T23:41:44Z
2026-05-07T15:29:12Z
2026-05-07T15:29:12Z
NONE
null
20260507T180032Z
2026-05-07T18:00:32Z
michaelroyzen
45,830,328
MDQ6VXNlcjQ1ODMwMzI4
User
false
huggingface/transformers
4,388,500,815
I_kwDOCUB6oc8AAAABBZM1Tw
45,800
https://github.com/huggingface/transformers/issues/45800
https://api.github.com/repos/huggingface/transformers/issues/45800
incompatibility between torch 2.4.1 and transformers 5.8.0
### System Info torch 2.4.1 transformers 5.8.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Re...
closed
completed
false
1
[ "bug" ]
[]
2026-05-06T03:19:54Z
2026-05-08T11:54:28Z
2026-05-08T11:54:28Z
NONE
null
20260508T120022Z
2026-05-08T12:00:22Z
nickjyj
46,984,040
MDQ6VXNlcjQ2OTg0MDQw
User
false
huggingface/transformers
2,054,646,556
I_kwDOCUB6oc56d2sc
28,218
https://github.com/huggingface/transformers/issues/28218
https://api.github.com/repos/huggingface/transformers/issues/28218
Tokenizer adds an additional space after the added token
### System Info - `transformers` version: 4.35.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (False) - Tensorflow ver...
open
null
false
7
[ "Good Difficult Issue" ]
[]
2023-12-23T03:43:18Z
2026-05-07T10:18:25Z
null
NONE
null
20260507T180032Z
2026-05-07T18:00:32Z
kitkhai
71,968,397
MDQ6VXNlcjcxOTY4Mzk3
User
false
huggingface/transformers
4,390,807,480
I_kwDOCUB6oc8AAAABBbZnuA
45,803
https://github.com/huggingface/transformers/issues/45803
https://api.github.com/repos/huggingface/transformers/issues/45803
[Bug] Bare `except:` in FuyuBatchFeature.convert_to_tensors() swallows KeyboardInterrupt and hides real errors
### System Info transformers: 5.7.0.dev0 (main, commit 8659ae6245) Python: 3.13.7 Platform: Windows 11 AMD64 PyTorch: 2.11.0+cu126 ### Who can help? @yonigozlan @molbap ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` ...
closed
completed
false
3
[ "bug" ]
[]
2026-05-06T11:01:42Z
2026-05-08T09:56:54Z
2026-05-08T09:56:54Z
CONTRIBUTOR
null
20260508T120022Z
2026-05-08T12:00:22Z
Abineshabee
104,718,709
U_kgDOBj3hdQ
User
false
huggingface/transformers
4,393,029,377
I_kwDOCUB6oc8AAAABBdhPAQ
45,810
https://github.com/huggingface/transformers/issues/45810
https://api.github.com/repos/huggingface/transformers/issues/45810
Add Qwen3_5ForTokenClassification for use as a value model
### Feature request Add Qwen3_5ForTokenClassification for use as a value model. ### Motivation In verl, AutoModelForTokenClassification is used to load the value model: https://github.com/volcengine/verl/blob/2d6c6dbb39bf846d4ebf98c89fc5b4f49c37dd3d/verl/utils/model.py#L627 so I want to add this for using trans...
open
null
false
3
[ "Feature request" ]
[]
2026-05-06T16:49:05Z
2026-05-07T09:55:56Z
null
NONE
null
20260507T180032Z
2026-05-07T18:00:32Z
han2-l
188,972,972
U_kgDOC0N_rA
User
false
huggingface/transformers
4,393,508,674
I_kwDOCUB6oc8AAAABBd-fQg
45,812
https://github.com/huggingface/transformers/issues/45812
https://api.github.com/repos/huggingface/transformers/issues/45812
`AutoTokenizer` produces wrong token IDs for all Granite models (silent v4→v5 regression)
### System Info - `transformers` version: 5.8.0 (also reproduced on 5.0.0 through 5.7.0) - Platform: Linux-5.14.0-503.11.1.el9_5.x86_64-x86_64-with-glibc2.34 - Python version: 3.12.13 - Huggingface_hub version: 1.14.0 - Safetensors version: 0.7.0 - Tokenizers version: 0.22.2 - PyTorch version: not installed (tokenizer...
open
null
false
2
[ "bug" ]
[]
2026-05-06T18:23:26Z
2026-05-11T19:23:51Z
null
NONE
null
20260512T000156Z
2026-05-12T00:01:56Z
kndtran
19,249,995
MDQ6VXNlcjE5MjQ5OTk1
User
false
huggingface/transformers
2,937,833,531
I_kwDOCUB6oc6vG8g7
36,879
https://github.com/huggingface/transformers/issues/36879
https://api.github.com/repos/huggingface/transformers/issues/36879
Add RF-DETR model
### Model description Hi, Roboflow just released [RF-DETR](https://blog.roboflow.com/rf-detr/#how-to-use-rf-detr), a new object detection model based on DINOv2, Deformable DETR (which are already part of transformers) and LW DETR. ### Open source status - [x] The model implementation is available - [x] The model ...
closed
completed
false
1
[ "New model", "Vision" ]
[]
2025-03-21T09:38:40Z
2026-05-07T06:56:41Z
2026-05-07T06:56:41Z
CONTRIBUTOR
null
20260507T180032Z
2026-05-07T18:00:32Z
sbucaille
24,275,548
MDQ6VXNlcjI0Mjc1NTQ4
User
false
huggingface/transformers
4,397,611,087
I_kwDOCUB6oc8AAAABBh44Tw
45,820
https://github.com/huggingface/transformers/issues/45820
https://api.github.com/repos/huggingface/transformers/issues/45820
Causal mask missing from the DeepSeekV4 CSA Indexer
Official: ✅ has a mask; Query[t] only retrieves compressed[0..t//ratio]; causally correct. Transformers: ❌ no mask; Query[t] can retrieve all compressed entries; information leak. Comparing against the official inference code: when the indexer performs retrieval, it only retrieves windows that have already been compressed, but this part of the implementation does not appear in transformers.
closed
completed
false
6
[]
[]
2026-05-07T09:22:07Z
2026-05-12T09:28:05Z
2026-05-12T09:28:05Z
NONE
null
20260512T120027Z
2026-05-12T12:00:27Z
slZheng077
163,490,356
U_kgDOCb6qNA
User
false
huggingface/transformers
4,398,351,189
I_kwDOCUB6oc8AAAABBimDVQ
45,823
https://github.com/huggingface/transformers/issues/45823
https://api.github.com/repos/huggingface/transformers/issues/45823
Gemma4 PLE device mismatch with `device_map="auto"` during forward
### System Info - `transformers` version: 5.8.0 - Platform: Linux-6.6.122+-x86_64-with-glibc2.35 - Python version: 3.11.13 - Huggingface_hub version: 1.14.0 - Safetensors version: 0.6.2 - Accelerate version: 1.10.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2.1...
closed
completed
false
2
[ "bug" ]
[]
2026-05-07T11:23:10Z
2026-05-07T19:50:11Z
2026-05-07T19:50:11Z
NONE
null
20260508T000035Z
2026-05-08T00:00:35Z
rishon-galileo
203,699,605
U_kgDODCQ1lQ
User
false
huggingface/transformers
2,419,933,304
I_kwDOCUB6oc6QPUB4
32,101
https://github.com/huggingface/transformers/issues/32101
https://api.github.com/repos/huggingface/transformers/issues/32101
Using Trainer + a pretrained tokenizer + 4D attention mask is extremely slow
### System Info transformers 4.41.0 ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give ...
closed
completed
false
6
[ "Good Difficult Issue", "bug" ]
[]
2024-07-19T21:33:09Z
2026-05-12T12:50:44Z
2026-05-12T12:50:44Z
NONE
null
20260512T180027Z
2026-05-12T18:00:27Z
JackCai1206
16,009,360
MDQ6VXNlcjE2MDA5MzYw
User
false
huggingface/transformers
4,402,849,723
I_kwDOCUB6oc8AAAABBm4nuw
45,834
https://github.com/huggingface/transformers/issues/45834
https://api.github.com/repos/huggingface/transformers/issues/45834
Kosmos2.5: index error on long OCR input
### System Info - `transformers` version: 5.8.0 - Platform: Linux-7.0.3-arch1-2-x86_64-with-glibc2.43 - Python version: 3.13.11 - Huggingface_hub version: 1.14.0 - Safetensors version: 0.7.0 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (acceler...
open
null
false
3
[ "bug" ]
[]
2026-05-08T00:29:00Z
2026-05-11T19:17:44Z
null
NONE
null
20260512T000156Z
2026-05-12T00:01:56Z
nunq
46,054,695
MDQ6VXNlcjQ2MDU0Njk1
User
false
huggingface/transformers
1,168,963,497
I_kwDOCUB6oc5FrPep
16,157
https://github.com/huggingface/transformers/issues/16157
https://api.github.com/repos/huggingface/transformers/issues/16157
Implement Maximal Update Parametrization (muP)
# 🚀 Feature request This request is to open up a discussion on 1) whether it makes sense to implement [Maximal Update Parametrization (abbreviated muP)](http://arxiv.org/abs/220...
open
reopened
false
18
[ "WIP" ]
[]
2022-03-14T22:08:55Z
2026-05-08T16:37:34Z
null
NONE
null
20260508T180023Z
2026-05-08T18:00:23Z
thegregyang
53,244,851
MDQ6VXNlcjUzMjQ0ODUx
User
false
huggingface/transformers
4,404,517,571
I_kwDOCUB6oc8AAAABBoeaww
45,841
https://github.com/huggingface/transformers/issues/45841
https://api.github.com/repos/huggingface/transformers/issues/45841
Embed Agent Friendly Code Score Badge
Hi team — I'm Himanshu, I built Agent Friendly Code, which scores public repos on how legible they are to AI coding agents (clear conventions, docs, tests, build signals — not anything about accepting agent-authored PRs). `transformers` scored 75.9/100 — full breakdown: https://www.agentfriendlycode.com/repo/126 If y...
open
null
false
0
[]
[]
2026-05-08T07:04:06Z
2026-05-08T07:04:06Z
null
NONE
null
20260508T120022Z
2026-05-08T12:00:22Z
hsnice16
56,081,584
MDQ6VXNlcjU2MDgxNTg0
User
false